From xen-changelog-bounces@lists.xenproject.org Sat Oct 01 15:33:10 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Oct 2022 15:33:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.414544.658843 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oeeUC-0000Fx-2s; Sat, 01 Oct 2022 15:33:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 414544.658843; Sat, 01 Oct 2022 15:33:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oeeUB-0000Fn-W8; Sat, 01 Oct 2022 15:33:03 +0000
Received: by outflank-mailman (input) for mailman id 414544;
 Sat, 01 Oct 2022 15:33:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oeeUA-0000Fh-Eu
 for xen-changelog@lists.xenproject.org; Sat, 01 Oct 2022 15:33:02 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oeeUA-0008J8-D3
 for xen-changelog@lists.xenproject.org; Sat, 01 Oct 2022 15:33:02 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oeeUA-0003QQ-Bw
 for xen-changelog@lists.xenproject.org; Sat, 01 Oct 2022 15:33:02 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=gjOl1g+ouaToamzHGkVXtYAzrgSLbIO0VgdtuUEaxtI=; b=aB2xcdFdU7cuZaOqqCn7XLu6yu
	bY5nFrL+dkEYFCUNcvktXajO6yQpb0ADWFesGfDKqLX1sciFZ8Gp85+FuxmXnlIlTKhLQAElevnGP
	C46p9a6a6PIDiOonMJ17W/ZyKWOVaNcG9YGccQaTzWmzbcU/Qkc0UwcIIA3IUa24xCzQ=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] tools: remove xenstore entries on vchan server closure
Message-Id: <E1oeeUA-0003QQ-Bw@xenbits.xenproject.org>
Date: Sat, 01 Oct 2022 15:33:02 +0000

commit 3ab6ea992b0e5e1a332bdbc8ae56d72f1b66fcbd
Author:     Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
AuthorDate: Thu Sep 29 14:38:02 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Thu Sep 29 14:38:02 2022 +0200

    tools: remove xenstore entries on vchan server closure
    
    vchan server creates XenStore entries to advertise its event channel and
    ring, but those are not removed after the server quits.
    Add an additional cleanup step to remove those entries, so that clients
    do not try to connect to a non-existent server.
    
    Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
    Signed-off-by: Dmytro Semenets <dmytro_semenets@epam.com>
    Reviewed-by: Juergen Gross <jgross@suse.com>
    Acked-by: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/include/libxenvchan.h |  5 +++++
 tools/libs/vchan/init.c     | 24 ++++++++++++++++++++++++
 tools/libs/vchan/io.c       |  4 ++++
 tools/libs/vchan/vchan.h    | 31 +++++++++++++++++++++++++++++++
 4 files changed, 64 insertions(+)

diff --git a/tools/include/libxenvchan.h b/tools/include/libxenvchan.h
index d6010b145d..30cc73cf97 100644
--- a/tools/include/libxenvchan.h
+++ b/tools/include/libxenvchan.h
@@ -86,6 +86,11 @@ struct libxenvchan {
 	int blocking:1;
 	/* communication rings */
 	struct libxenvchan_ring read, write;
+	/**
+	 * Base xenstore path for storing ring/event data used by the server
+	 * during cleanup.
+	 * */
+	char *xs_path;
 };
 
 /**
diff --git a/tools/libs/vchan/init.c b/tools/libs/vchan/init.c
index c8510e6ce9..9195bd3b98 100644
--- a/tools/libs/vchan/init.c
+++ b/tools/libs/vchan/init.c
@@ -46,6 +46,8 @@
 #include <xen/sys/gntdev.h>
 #include <libxenvchan.h>
 
+#include "vchan.h"
+
 #ifndef PAGE_SHIFT
 #define PAGE_SHIFT 12
 #endif
@@ -251,6 +253,12 @@ static int init_xs_srv(struct libxenvchan *ctrl, int domain, const char* xs_base
 	char ref[16];
 	char* domid_str = NULL;
 	xs_transaction_t xs_trans = XBT_NULL;
+
+	/* store the base path so we can clean up on server closure */
+	ctrl->xs_path = strdup(xs_base);
+	if (!ctrl->xs_path)
+		return -1; 
+
 	xs = xs_open(0);
 	if (!xs)
 		goto fail;
@@ -298,6 +306,22 @@ retry_transaction:
 	return ret;
 }
 
+void close_xs_srv(struct libxenvchan *ctrl)
+{
+	struct xs_handle *xs;
+
+	if (!ctrl->xs_path)
+		return;
+
+	xs = xs_open(0);
+	if (xs) {
+		xs_rm(xs, XBT_NULL, ctrl->xs_path);
+		xs_close(xs);
+	}
+
+	free(ctrl->xs_path);
+}
+
 static int min_order(size_t siz)
 {
 	int rv = PAGE_SHIFT;
diff --git a/tools/libs/vchan/io.c b/tools/libs/vchan/io.c
index da303fbc01..1f201ad554 100644
--- a/tools/libs/vchan/io.c
+++ b/tools/libs/vchan/io.c
@@ -40,6 +40,8 @@
 #include <xenctrl.h>
 #include <libxenvchan.h>
 
+#include "vchan.h"
+
 #ifndef PAGE_SHIFT
 #define PAGE_SHIFT 12
 #endif
@@ -384,5 +386,7 @@ void libxenvchan_close(struct libxenvchan *ctrl)
 		if (ctrl->gnttab)
 			xengnttab_close(ctrl->gnttab);
 	}
+	if (ctrl->is_server)
+		close_xs_srv(ctrl);
 	free(ctrl);
 }
diff --git a/tools/libs/vchan/vchan.h b/tools/libs/vchan/vchan.h
new file mode 100644
index 0000000000..621016ef42
--- /dev/null
+++ b/tools/libs/vchan/vchan.h
@@ -0,0 +1,31 @@
+/**
+ * @file
+ * @section AUTHORS
+ *
+ * Copyright (C) 2021 EPAM Systems Inc.
+ *
+ * @section LICENSE
+ *
+ *  This library is free software; you can redistribute it and/or
+ *  modify it under the terms of the GNU Lesser General Public
+ *  License as published by the Free Software Foundation; either
+ *  version 2.1 of the License, or (at your option) any later version.
+ *
+ *  This library is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *  Lesser General Public License for more details.
+ *
+ *  You should have received a copy of the GNU Lesser General Public
+ *  License along with this library; If not, see <http://www.gnu.org/licenses/>.
+ *
+ * @section DESCRIPTION
+ *
+ *  This file contains common libxenvchan declarations.
+ */
+#ifndef LIBVCHAN_H
+#define LIBVCHAN_H
+
+void close_xs_srv(struct libxenvchan *ctrl);
+
+#endif /* LIBVCHAN_H */
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Sat Oct 01 15:33:14 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Oct 2022 15:33:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.414545.658847 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oeeUM-0000Hc-3t; Sat, 01 Oct 2022 15:33:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 414545.658847; Sat, 01 Oct 2022 15:33:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oeeUM-0000HU-1Q; Sat, 01 Oct 2022 15:33:14 +0000
Received: by outflank-mailman (input) for mailman id 414545;
 Sat, 01 Oct 2022 15:33:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oeeUK-0000HK-IU
 for xen-changelog@lists.xenproject.org; Sat, 01 Oct 2022 15:33:12 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oeeUK-0008JU-Hi
 for xen-changelog@lists.xenproject.org; Sat, 01 Oct 2022 15:33:12 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oeeUK-0003Qu-FE
 for xen-changelog@lists.xenproject.org; Sat, 01 Oct 2022 15:33:12 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=rcdfZC0SFl+2chok62JSmZVRMzo7hbwrK7UlZNIXf2Q=; b=GNhoMffQrPRXTlZVBZbIA2LwGM
	pZI+pn7rwrB+LWEGye6MKvaQl5KHkX1eb5U2sGK7fSJm7MQOFQpkjv+u0SJ1+fgSpnToxh6VpLDYc
	WVFmdtJ6gycAUedMLYZK0vC0CQ+3JuC64xxpjzdwrfL3Y1cDQZwp7Pk7GDvbWAk6bHxU=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] MAINTAINERS: ARINC 653 scheduler maintainer updates
Message-Id: <E1oeeUK-0003Qu-FE@xenbits.xenproject.org>
Date: Sat, 01 Oct 2022 15:33:12 +0000

commit e1de23b7c1bfa02447a79733e64184b3635e0587
Author:     Stewart Hildebrand <stewart.hildebrand@dornerworks.com>
AuthorDate: Thu Sep 29 14:38:22 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Thu Sep 29 14:38:22 2022 +0200

    MAINTAINERS: ARINC 653 scheduler maintainer updates
    
    Add Nathan Studer as co-maintainer.
    
    I am departing DornerWorks. I will still be working with Xen in my next
    role, and I still have an interest in co-maintaining the ARINC 653
    scheduler, so change to my personal email address.
    
    Signed-off-by: Stewart Hildebrand <stewart.hildebrand@dornerworks.com>
    Acked-by: Nathan Studer <nathan.studer@dornerworks.com>
---
 MAINTAINERS | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index e12c499a28..816656950a 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -221,7 +221,8 @@ F:	xen/include/xen/argo.h
 F:	xen/common/argo.c
 
 ARINC653 SCHEDULER
-M:	Stewart Hildebrand <stewart.hildebrand@dornerworks.com>
+M:	Nathan Studer <nathan.studer@dornerworks.com>
+M:	Stewart Hildebrand <stewart@stew.dk>
 S:	Supported
 L:	xen-devel@dornerworks.com
 F:	xen/common/sched/arinc653.c
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Sat Oct 01 15:33:24 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Oct 2022 15:33:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.414546.658850 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oeeUW-0000Js-5Q; Sat, 01 Oct 2022 15:33:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 414546.658850; Sat, 01 Oct 2022 15:33:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oeeUW-0000Jk-2r; Sat, 01 Oct 2022 15:33:24 +0000
Received: by outflank-mailman (input) for mailman id 414546;
 Sat, 01 Oct 2022 15:33:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oeeUU-0000JU-LP
 for xen-changelog@lists.xenproject.org; Sat, 01 Oct 2022 15:33:22 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oeeUU-0008Jl-Kn
 for xen-changelog@lists.xenproject.org; Sat, 01 Oct 2022 15:33:22 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oeeUU-0003RN-Jr
 for xen-changelog@lists.xenproject.org; Sat, 01 Oct 2022 15:33:22 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=4RYg4mtGQCWbXOkEzb3Rmf7+Gr8GHZw+hz/n0Wcp+3k=; b=7EJS7h5fhdvTSyI0O2a7SkMNsq
	MT9ksMqlmWGv5kKoA3QlDsqLs6vpocopxqCqyb2FmmPSRWPx1jsNZ9pVimde9mrwTO7VUx10KqP6J
	YuVjXJGSpOG56D6U2iSUEllSzaX1F0SxVT+s3vuofUgr+WtF650RPoXwzkhH8g4ILh48=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] x86/NUMA: correct memnode_shift calculation for single node system
Message-Id: <E1oeeUU-0003RN-Jr@xenbits.xenproject.org>
Date: Sat, 01 Oct 2022 15:33:22 +0000

commit 0db195c1a9947240b354abbefd2afac6c73ad6a8
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Thu Sep 29 14:39:52 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Thu Sep 29 14:39:52 2022 +0200

    x86/NUMA: correct memnode_shift calculation for single node system
    
    SRAT may describe even a single node system (including such with
    multiple nodes, but only one having any memory) using multiple ranges.
    Hence simply counting the number of ranges (note that function
    parameters are mis-named) is not an indication of the number of nodes in
    use. Since we only care about knowing whether we're on a single node
    system, accounting for this is easy: Increment the local variable only
    when adjacent ranges are for different nodes. That way the count may
    still end up larger than the number of nodes in use, but it won't be
    larger than 1 when only a single node has any memory.
    
    To compensate, populate_memnodemap() now needs to be prepared to find
    the correct node ID already in place for a range. (This could of course
    also happen when there's more than one node with memory, while at least
    one node has multiple adjacent ranges, provided extract_lsb_from_nodes()
    would also know to recognize this case.)
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/numa.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/numa.c b/xen/arch/x86/numa.c
index 627ae8aa95..1bc82c60aa 100644
--- a/xen/arch/x86/numa.c
+++ b/xen/arch/x86/numa.c
@@ -78,7 +78,8 @@ static int __init populate_memnodemap(const struct node *nodes,
         if ( (epdx >> shift) >= memnodemapsize )
             return 0;
         do {
-            if ( memnodemap[spdx >> shift] != NUMA_NO_NODE )
+            if ( memnodemap[spdx >> shift] != NUMA_NO_NODE &&
+                 (!nodeids || memnodemap[spdx >> shift] != nodeids[i]) )
                 return -1;
 
             if ( !nodeids )
@@ -114,7 +115,7 @@ static int __init allocate_cachealigned_memnodemap(void)
  * maximum possible shift.
  */
 static int __init extract_lsb_from_nodes(const struct node *nodes,
-                                         int numnodes)
+                                         int numnodes, const nodeid_t *nodeids)
 {
     int i, nodes_used = 0;
     unsigned long spdx, epdx;
@@ -127,7 +128,8 @@ static int __init extract_lsb_from_nodes(const struct node *nodes,
         if ( spdx >= epdx )
             continue;
         bitfield |= spdx;
-        nodes_used++;
+        if ( !i || !nodeids || nodeids[i - 1] != nodeids[i] )
+            nodes_used++;
         if ( epdx > memtop )
             memtop = epdx;
     }
@@ -144,7 +146,7 @@ int __init compute_hash_shift(struct node *nodes, int numnodes,
 {
     int shift;
 
-    shift = extract_lsb_from_nodes(nodes, numnodes);
+    shift = extract_lsb_from_nodes(nodes, numnodes, nodeids);
     if ( memnodemapsize <= ARRAY_SIZE(_memnodemap) )
         memnodemap = _memnodemap;
     else if ( allocate_cachealigned_memnodemap() )
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Sat Oct 01 15:33:34 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Oct 2022 15:33:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.414547.658855 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oeeUg-0000N6-74; Sat, 01 Oct 2022 15:33:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 414547.658855; Sat, 01 Oct 2022 15:33:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oeeUg-0000My-4R; Sat, 01 Oct 2022 15:33:34 +0000
Received: by outflank-mailman (input) for mailman id 414547;
 Sat, 01 Oct 2022 15:33:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oeeUe-0000Ml-Or
 for xen-changelog@lists.xenproject.org; Sat, 01 Oct 2022 15:33:32 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oeeUe-0008Jv-O2
 for xen-changelog@lists.xenproject.org; Sat, 01 Oct 2022 15:33:32 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oeeUe-0003Rr-NC
 for xen-changelog@lists.xenproject.org; Sat, 01 Oct 2022 15:33:32 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=MlMjbEwUvue5QGNGiGB0ZlaXe4RcxawimbBBplXxYas=; b=FLZRH4DhEyY6Dg1jM4xyQigcRk
	7hoD3laH0c+0GBGaY1EtN6cnGBwHx9IS1dPxR6XXp6ESKLWXX/iXgCRigC0BgDeB1vHFPGp9+ZUuF
	oSJUc1+z3/wVFyEN+Ac3ft6xBr3ztG0TaVRAVqjSGxWUNyjCC+Rwy9HUjTw26/YONcnU=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] arm/vgic: drop const attribute from gic_iomem_deny_access()
Message-Id: <E1oeeUe-0003Rr-NC@xenbits.xenproject.org>
Date: Sat, 01 Oct 2022 15:33:32 +0000

commit 9982fe275ba4ee1a749b6dde5602a5a79e42b543
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Thu Sep 29 14:41:13 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Thu Sep 29 14:41:13 2022 +0200

    arm/vgic: drop const attribute from gic_iomem_deny_access()
    
    While correct from a code point of view, the usage of the const
    attribute for the domain parameter of gic_iomem_deny_access() is at
    least partially bogus.  The contents of the domain structure (the iomem
    rangeset) are modified by the function.  Such modifications succeed
    because right now the iomem rangeset is allocated separately from
    struct domain, and hence is not subject to the constness of struct
    domain.
    
    Amend this by dropping the const attribute from the function
    parameter.
    
    This is required by further changes that will convert
    iomem_{permit,deny}_access into a function.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 xen/arch/arm/gic-v2.c          | 2 +-
 xen/arch/arm/gic-v3.c          | 2 +-
 xen/arch/arm/gic.c             | 2 +-
 xen/arch/arm/include/asm/gic.h | 4 ++--
 4 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
index bd773bcc67..ae5bd8e95f 100644
--- a/xen/arch/arm/gic-v2.c
+++ b/xen/arch/arm/gic-v2.c
@@ -1083,7 +1083,7 @@ static void __init gicv2_dt_init(void)
     gicv2_extension_dt_init(node);
 }
 
-static int gicv2_iomem_deny_access(const struct domain *d)
+static int gicv2_iomem_deny_access(struct domain *d)
 {
     int rc;
     unsigned long mfn, nr;
diff --git a/xen/arch/arm/gic-v3.c b/xen/arch/arm/gic-v3.c
index 64b36cec25..018fa0dfa0 100644
--- a/xen/arch/arm/gic-v3.c
+++ b/xen/arch/arm/gic-v3.c
@@ -1424,7 +1424,7 @@ static void __init gicv3_dt_init(void)
                               &vbase, &vsize);
 }
 
-static int gicv3_iomem_deny_access(const struct domain *d)
+static int gicv3_iomem_deny_access(struct domain *d)
 {
     int rc, i;
     unsigned long mfn, nr;
diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 3b0331b538..9b82325442 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -462,7 +462,7 @@ unsigned long gic_get_hwdom_madt_size(const struct domain *d)
 }
 #endif
 
-int gic_iomem_deny_access(const struct domain *d)
+int gic_iomem_deny_access(struct domain *d)
 {
     return gic_hw_ops->iomem_deny_access(d);
 }
diff --git a/xen/arch/arm/include/asm/gic.h b/xen/arch/arm/include/asm/gic.h
index 3692fae393..76e3fa5dc4 100644
--- a/xen/arch/arm/include/asm/gic.h
+++ b/xen/arch/arm/include/asm/gic.h
@@ -392,7 +392,7 @@ struct gic_hw_operations {
     /* Map extra GIC MMIO, irqs and other hw stuffs to the hardware domain. */
     int (*map_hwdom_extra_mappings)(struct domain *d);
     /* Deny access to GIC regions */
-    int (*iomem_deny_access)(const struct domain *d);
+    int (*iomem_deny_access)(struct domain *d);
     /* Handle LPIs, which require special handling */
     void (*do_LPI)(unsigned int lpi);
 };
@@ -449,7 +449,7 @@ unsigned long gic_get_hwdom_madt_size(const struct domain *d);
 #endif
 
 int gic_map_hwdom_extra_mappings(struct domain *d);
-int gic_iomem_deny_access(const struct domain *d);
+int gic_iomem_deny_access(struct domain *d);
 
 #endif /* __ASSEMBLY__ */
 #endif
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Sat Oct 01 15:33:44 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Oct 2022 15:33:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.414550.658870 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oeeUq-0000f4-GT; Sat, 01 Oct 2022 15:33:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 414550.658870; Sat, 01 Oct 2022 15:33:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oeeUq-0000ew-DC; Sat, 01 Oct 2022 15:33:44 +0000
Received: by outflank-mailman (input) for mailman id 414550;
 Sat, 01 Oct 2022 15:33:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oeeUo-0000b5-S9
 for xen-changelog@lists.xenproject.org; Sat, 01 Oct 2022 15:33:42 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oeeUo-0008KD-RR
 for xen-changelog@lists.xenproject.org; Sat, 01 Oct 2022 15:33:42 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oeeUo-0003SM-QJ
 for xen-changelog@lists.xenproject.org; Sat, 01 Oct 2022 15:33:42 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=dx+3ybpH3FxpFyAQ8r2wPPei/x949XulpXSmb3Yy4so=; b=rgfLIHiSTr799wQoOaiT6tkbl6
	wUl0aScfHm+nY4B+pP+YPR3glO7UtsM+fbWLGl6OX7BFDrVhcW257zeOd51GcgSW2RqIG6fRyILOX
	W2XkeW5Bd1CKiDEKiu342npYAVK5MOJeBsJ8YPI4jZRdR001ua0fy2iOVBubmFaaaiQA=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] x86/ept: limit calls to memory_type_changed()
Message-Id: <E1oeeUo-0003SM-QJ@xenbits.xenproject.org>
Date: Sat, 01 Oct 2022 15:33:42 +0000

commit c4e5cc2ccc5b8274d02f7855c4769839989bb349
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Thu Sep 29 14:44:10 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Thu Sep 29 14:44:10 2022 +0200

    x86/ept: limit calls to memory_type_changed()
    
    memory_type_changed() is currently only implemented for Intel EPT, and
    results in the invalidation of EMT attributes on all the entries in
    the EPT page tables.  Such invalidation causes EPT_MISCONFIG vmexits
    when the guest tries to access any gfns for the first time, which
    results in the recalculation of the EMT for the accessed page.  The
    vmexit and the recalculations are expensive, and as such should be
    avoided when possible.
    
    Remove the call to memory_type_changed() from
    XEN_DOMCTL_memory_mapping: there are no modifications of the
    iomem_caps ranges anymore that could alter the return of
    cache_flush_permitted() from that domctl.
    
    Encapsulate calls to memory_type_changed() resulting from changes to
    the domain iomem_caps or ioport_caps ranges in the helpers themselves
    (io{ports,mem}_{permit,deny}_access()), and add a note in
    epte_get_entry_emt() to remind that changes to the logic there likely
    need to be propagated to the IO capabilities helpers.
    
    Note that changes to the IO ports or memory ranges are not very common
    during guest runtime, but Citrix Hypervisor has a use case for them
    related to device passthrough.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/x86/domctl.c            |  4 ----
 xen/arch/x86/include/asm/iocap.h | 34 ++++++++++++++++++++++++++++++----
 xen/arch/x86/mm/p2m-ept.c        |  4 ++++
 xen/common/domctl.c              |  4 ----
 xen/include/xen/iocap.h          | 39 +++++++++++++++++++++++++++++++++++----
 5 files changed, 69 insertions(+), 16 deletions(-)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 020df615bd..e9bfbc57a7 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -232,8 +232,6 @@ long arch_do_domctl(
             ret = ioports_permit_access(d, fp, fp + np - 1);
         else
             ret = ioports_deny_access(d, fp, fp + np - 1);
-        if ( !ret )
-            memory_type_changed(d);
         break;
     }
 
@@ -666,8 +664,6 @@ long arch_do_domctl(
                        "ioport_map: error %ld denying dom%d access to [%x,%x]\n",
                        ret, d->domain_id, fmp, fmp + np - 1);
         }
-        if ( !ret )
-            memory_type_changed(d);
         break;
     }
 
diff --git a/xen/arch/x86/include/asm/iocap.h b/xen/arch/x86/include/asm/iocap.h
index eee47228d4..53d87ae8a3 100644
--- a/xen/arch/x86/include/asm/iocap.h
+++ b/xen/arch/x86/include/asm/iocap.h
@@ -7,10 +7,11 @@
 #ifndef __X86_IOCAP_H__
 #define __X86_IOCAP_H__
 
-#define ioports_permit_access(d, s, e)                  \
-    rangeset_add_range((d)->arch.ioport_caps, s, e)
-#define ioports_deny_access(d, s, e)                    \
-    rangeset_remove_range((d)->arch.ioport_caps, s, e)
+#include <xen/sched.h>
+#include <xen/rangeset.h>
+
+#include <asm/p2m.h>
+
 #define ioports_access_permitted(d, s, e)               \
     rangeset_contains_range((d)->arch.ioport_caps, s, e)
 
@@ -18,4 +19,29 @@
     (!rangeset_is_empty((d)->iomem_caps) ||             \
      !rangeset_is_empty((d)->arch.ioport_caps))
 
+static inline int ioports_permit_access(struct domain *d, unsigned long s,
+                                        unsigned long e)
+{
+    bool flush = cache_flush_permitted(d);
+    int ret = rangeset_add_range(d->arch.ioport_caps, s, e);
+
+    if ( !ret && !is_iommu_enabled(d) && !flush )
+        /* See comment in iomem_permit_access(). */
+        memory_type_changed(d);
+
+    return ret;
+}
+
+static inline int ioports_deny_access(struct domain *d, unsigned long s,
+                                      unsigned long e)
+{
+    int ret = rangeset_remove_range(d->arch.ioport_caps, s, e);
+
+    if ( !ret && !is_iommu_enabled(d) && !cache_flush_permitted(d) )
+        /* See comment in iomem_deny_access(). */
+        memory_type_changed(d);
+
+    return ret;
+}
+
 #endif /* __X86_IOCAP_H__ */
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index b4919bad51..d61d66c20e 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -518,6 +518,10 @@ int epte_get_entry_emt(struct domain *d, gfn_t gfn, mfn_t mfn,
         return MTRR_TYPE_UNCACHABLE;
     }
 
+    /*
+     * Conditional must be kept in sync with the code in
+     * {iomem,ioports}_{permit,deny}_access().
+     */
     if ( type != p2m_mmio_direct && !is_iommu_enabled(d) &&
          !cache_flush_permitted(d) )
     {
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 452266710a..69fb9abd34 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -716,8 +716,6 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
             ret = iomem_permit_access(d, mfn, mfn + nr_mfns - 1);
         else
             ret = iomem_deny_access(d, mfn, mfn + nr_mfns - 1);
-        if ( !ret )
-            memory_type_changed(d);
         break;
     }
 
@@ -778,8 +776,6 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
                        "memory_map: error %ld removing dom%d access to [%lx,%lx]\n",
                        ret, d->domain_id, mfn, mfn_end);
         }
-        /* Do this unconditionally to cover errors on above failure paths. */
-        memory_type_changed(d);
         break;
     }
 
diff --git a/xen/include/xen/iocap.h b/xen/include/xen/iocap.h
index 1ca3858fc0..ffbc48b60f 100644
--- a/xen/include/xen/iocap.h
+++ b/xen/include/xen/iocap.h
@@ -7,13 +7,44 @@
 #ifndef __XEN_IOCAP_H__
 #define __XEN_IOCAP_H__
 
+#include <xen/sched.h>
 #include <xen/rangeset.h>
 #include <asm/iocap.h>
+#include <asm/p2m.h>
+
+static inline int iomem_permit_access(struct domain *d, unsigned long s,
+                                      unsigned long e)
+{
+    bool flush = cache_flush_permitted(d);
+    int ret = rangeset_add_range(d->iomem_caps, s, e);
+
+    if ( !ret && !is_iommu_enabled(d) && !flush )
+        /*
+         * Only flush if the range(s) are empty before this addition and
+         * IOMMU is not enabled for the domain, otherwise it makes no
+         * difference for effective cache attribute calculation purposes.
+         */
+        memory_type_changed(d);
+
+    return ret;
+}
+
+static inline int iomem_deny_access(struct domain *d, unsigned long s,
+                                    unsigned long e)
+{
+    int ret = rangeset_remove_range(d->iomem_caps, s, e);
+
+    if ( !ret && !is_iommu_enabled(d) && !cache_flush_permitted(d) )
+        /*
+         * Only flush if the range(s) are empty after this removal and
+         * IOMMU is not enabled for the domain, otherwise it makes no
+         * difference for effective cache attribute calculation purposes.
+         */
+        memory_type_changed(d);
+
+    return ret;
+}
 
-#define iomem_permit_access(d, s, e)                    \
-    rangeset_add_range((d)->iomem_caps, s, e)
-#define iomem_deny_access(d, s, e)                      \
-    rangeset_remove_range((d)->iomem_caps, s, e)
 #define iomem_access_permitted(d, s, e)                 \
     rangeset_contains_range((d)->iomem_caps, s, e)
 
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Sat Oct 01 15:33:54 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Oct 2022 15:33:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.414552.658874 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oeeV0-0000kp-I4; Sat, 01 Oct 2022 15:33:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 414552.658874; Sat, 01 Oct 2022 15:33:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oeeV0-0000ki-F9; Sat, 01 Oct 2022 15:33:54 +0000
Received: by outflank-mailman (input) for mailman id 414552;
 Sat, 01 Oct 2022 15:33:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oeeUy-0000k4-Uu
 for xen-changelog@lists.xenproject.org; Sat, 01 Oct 2022 15:33:52 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oeeUy-0008KO-UC
 for xen-changelog@lists.xenproject.org; Sat, 01 Oct 2022 15:33:52 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oeeUy-0003Sr-Ta
 for xen-changelog@lists.xenproject.org; Sat, 01 Oct 2022 15:33:52 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=7/uhjhrvfl5EtKcT/uMkBZSLFEY7nT7t0kOpx1U1hjQ=; b=X368x8lQzEk7/O1htVeJZLk5OZ
	a7uFif+bZzaSAzqcTLqxzS4Rb69i59QoEFVnL7ECXRwyfapsCPukigN7sec9UVki93RBTOthRGarS
	d0fw1txE1xaZFo9aoy6Y/odC4STicE7u7IDNyrOW/YKXBkwLzTTfvWawoBxc3LRQP6BQ=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] x86: re-connect VCPUOP_send_nmi for 32-bit guests
Message-Id: <E1oeeUy-0003Sr-Ta@xenbits.xenproject.org>
Date: Sat, 01 Oct 2022 15:33:52 +0000

commit 9214da34a3cb017ff0417900250bd6d18ca89e15
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Thu Sep 29 14:46:50 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Thu Sep 29 14:46:50 2022 +0200

    x86: re-connect VCPUOP_send_nmi for 32-bit guests
    
    With the "inversion" of VCPUOP handling, processing arch-specific ones
    first, the forwarding of this sub-op from the (common) compat handler to
    the (common) non-compat one no longer had the intended effect. It now
    needs forwarding between the arch-specific handlers.
    
    Fixes: 8a96c0ea7999 ("xen: move do_vcpu_op() to arch specific code")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/arch/x86/x86_64/domain.c | 1 +
 xen/common/compat/domain.c   | 1 -
 2 files changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/x86_64/domain.c b/xen/arch/x86/x86_64/domain.c
index 62fe51ee74..9b2f7a7d7a 100644
--- a/xen/arch/x86/x86_64/domain.c
+++ b/xen/arch/x86/x86_64/domain.c
@@ -58,6 +58,7 @@ compat_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
         break;
     }
 
+    case VCPUOP_send_nmi:
     case VCPUOP_get_physid:
         rc = do_vcpu_op(cmd, vcpuid, arg);
         break;
diff --git a/xen/common/compat/domain.c b/xen/common/compat/domain.c
index 1119534679..c425490535 100644
--- a/xen/common/compat/domain.c
+++ b/xen/common/compat/domain.c
@@ -99,7 +99,6 @@ int compat_common_vcpu_op(int cmd, struct vcpu *v,
     case VCPUOP_stop_periodic_timer:
     case VCPUOP_stop_singleshot_timer:
     case VCPUOP_register_vcpu_info:
-    case VCPUOP_send_nmi:
         rc = common_vcpu_op(cmd, v, arg);
         break;
 
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Sat Oct 01 15:34:03 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Oct 2022 15:34:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.414553.658880 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oeeV9-0000qy-MN; Sat, 01 Oct 2022 15:34:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 414553.658880; Sat, 01 Oct 2022 15:34:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oeeV9-0000qp-Hy; Sat, 01 Oct 2022 15:34:03 +0000
Received: by outflank-mailman (input) for mailman id 414553;
 Sat, 01 Oct 2022 15:34:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oeeV9-0000qf-1c
 for xen-changelog@lists.xenproject.org; Sat, 01 Oct 2022 15:34:03 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oeeV9-0008Kl-0o
 for xen-changelog@lists.xenproject.org; Sat, 01 Oct 2022 15:34:03 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oeeV9-0003TR-00
 for xen-changelog@lists.xenproject.org; Sat, 01 Oct 2022 15:34:03 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=ZcKmlUwEFL3SpmK8KjkrhINwKsFRAQC6zccjvNKQ4E4=; b=5T9d2PaNi2E57IuMQWKHk4NaAo
	4KimeJO6mF2oFymaNSwz6pAXcCHRwTzg3DjOouVkdkv/8Xt20S7rbFEmWrf1AdXQzLC1X4nEccwFn
	ks6h/kuy4MF/DylD6gSLoKr36UVioALKG6WMwCT1A7ZcST2YMoDHMhb6QoCcU6mv8mEw=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] x86: wire up VCPUOP_register_vcpu_time_memory_area for 32-bit guests
Message-Id: <E1oeeV9-0003TR-00@xenbits.xenproject.org>
Date: Sat, 01 Oct 2022 15:34:03 +0000

commit b726541d94bd0a80b5864d17a2cd2e6d73a3fe0a
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Thu Sep 29 14:47:45 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Thu Sep 29 14:47:45 2022 +0200

    x86: wire up VCPUOP_register_vcpu_time_memory_area for 32-bit guests
    
    Ever since its introduction, VCPUOP_register_vcpu_time_memory_area has
    been available only to native domains. Linux, for example, would attempt
    to use it irrespective of guest bitness (including in its so-called
    PVHVM mode) as long as it finds XEN_PVCLOCK_TSC_STABLE_BIT set (which we
    set only for clocksource=tsc, which in turn needs enabling via a command
    line option).
    
    Fixes: a5d39947cb89 ("Allow guests to register secondary vcpu_time_info")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/arch/x86/x86_64/domain.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/xen/arch/x86/x86_64/domain.c b/xen/arch/x86/x86_64/domain.c
index 9b2f7a7d7a..bfaea17fe7 100644
--- a/xen/arch/x86/x86_64/domain.c
+++ b/xen/arch/x86/x86_64/domain.c
@@ -58,6 +58,26 @@ compat_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
         break;
     }
 
+    case VCPUOP_register_vcpu_time_memory_area:
+    {
+        struct compat_vcpu_register_time_memory_area area = { .addr.p = 0 };
+
+        rc = -EFAULT;
+        if ( copy_from_guest(&area.addr.h, arg, 1) )
+            break;
+
+        if ( area.addr.h.c != area.addr.p ||
+             !compat_handle_okay(area.addr.h, 1) )
+            break;
+
+        rc = 0;
+        guest_from_compat_handle(v->arch.time_info_guest, area.addr.h);
+
+        force_update_vcpu_system_time(v);
+
+        break;
+    }
+
     case VCPUOP_send_nmi:
     case VCPUOP_get_physid:
         rc = do_vcpu_op(cmd, vcpuid, arg);
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Sat Oct 01 15:34:13 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Oct 2022 15:34:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.414557.658882 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oeeVJ-0000zA-ME; Sat, 01 Oct 2022 15:34:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 414557.658882; Sat, 01 Oct 2022 15:34:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oeeVJ-0000z1-JT; Sat, 01 Oct 2022 15:34:13 +0000
Received: by outflank-mailman (input) for mailman id 414557;
 Sat, 01 Oct 2022 15:34:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oeeVJ-0000ys-53
 for xen-changelog@lists.xenproject.org; Sat, 01 Oct 2022 15:34:13 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oeeVJ-0008LE-4O
 for xen-changelog@lists.xenproject.org; Sat, 01 Oct 2022 15:34:13 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oeeVJ-0003U8-38
 for xen-changelog@lists.xenproject.org; Sat, 01 Oct 2022 15:34:13 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=0DfA+aXmib1w7fIZLQ0hf73h7fa9EsF8qFGSY+WQ5m8=; b=FoQWZ+2Fzh4iRp9FT0LUdka2dw
	VuyBjeIe126NinD/Qy5jZtnzrk+HEtInF3VZ9NW+eSTPzIShTTOOfCv3n04/6fdo5pXFCWGgbq0Ad
	yFHkFjtcytD8CaltAjm5N5kM+vCWKsTd0WWiLcZxG7jzRzy6kc7TSHDgXEABYCahn9Zg=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] xen/arm: domain_build: Always print the static shared memory region
Message-Id: <E1oeeVJ-0003U8-38@xenbits.xenproject.org>
Date: Sat, 01 Oct 2022 15:34:13 +0000

commit a210e94af38a957fcc99db01d2cfcc3039859445
Author:     Michal Orzel <michal.orzel@amd.com>
AuthorDate: Mon Sep 19 20:37:37 2022 +0200
Commit:     Stefano Stabellini <stefano.stabellini@amd.com>
CommitDate: Thu Sep 29 08:52:56 2022 -0700

    xen/arm: domain_build: Always print the static shared memory region
    
    At the moment, the information about allocating a static shared memory
    region is only printed in debug builds. This information can also
    be helpful for the end user (who may not be the same as the person
    building the package), so switch to printk(). Also drop XENLOG_INFO to be
    consistent with the other printk() calls used to print domain information.
    
    Signed-off-by: Michal Orzel <michal.orzel@amd.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/arch/arm/domain_build.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 01c2aaccd8..40e3c2e119 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -844,9 +844,8 @@ static int __init assign_shared_memory(struct domain *d,
     unsigned long nr_pages, nr_borrowers, i;
     struct page_info *page;
 
-    dprintk(XENLOG_INFO,
-            "%pd: allocate static shared memory BANK %#"PRIpaddr"-%#"PRIpaddr".\n",
-            d, pbase, pbase + psize);
+    printk("%pd: allocate static shared memory BANK %#"PRIpaddr"-%#"PRIpaddr".\n",
+           d, pbase, pbase + psize);
 
     smfn = acquire_shared_memory_bank(d, pbase, psize);
     if ( mfn_eq(smfn, INVALID_MFN) )
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Sat Oct 01 15:34:23 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Oct 2022 15:34:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.414559.658886 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oeeVT-000158-Ny; Sat, 01 Oct 2022 15:34:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 414559.658886; Sat, 01 Oct 2022 15:34:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oeeVT-00014y-Kv; Sat, 01 Oct 2022 15:34:23 +0000
Received: by outflank-mailman (input) for mailman id 414559;
 Sat, 01 Oct 2022 15:34:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oeeVT-00014o-7k
 for xen-changelog@lists.xenproject.org; Sat, 01 Oct 2022 15:34:23 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oeeVT-0008LU-78
 for xen-changelog@lists.xenproject.org; Sat, 01 Oct 2022 15:34:23 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oeeVT-0003VE-6X
 for xen-changelog@lists.xenproject.org; Sat, 01 Oct 2022 15:34:23 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=SmokZjUvfP3lyDAes+ku/W4UxQOk7UXo5oXMEJBkpYo=; b=X6W5s22fqwzt31b5j5Ed66raem
	zk7mGmdBtV2EknscGoNPpEECcSTgToCTQw9hg+aFXMlAkGbdBbQCjI3nIMPJJUv4gdB7+4YB1W0pA
	SWKTWlQoViwlIiArbxRoiSRYNj7p5LWDExP0L+st3Wbz4ZdlfLXUf4ojKjNeBXCXHqU4=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] automation: Information about running containers for a different arch
Message-Id: <E1oeeVT-0003VE-6X@xenbits.xenproject.org>
Date: Sat, 01 Oct 2022 15:34:23 +0000

commit fb7485788fd7db3b95f4e7fc9bfdfe9ef38e383f
Author:     Anthony PERARD <anthony.perard@citrix.com>
AuthorDate: Thu Sep 29 10:51:31 2022 +0100
Commit:     Stefano Stabellini <stefano.stabellini@amd.com>
CommitDate: Thu Sep 29 08:53:58 2022 -0700

    automation: Information about running containers for a different arch
    
    Add a pointer to 'qemu-user-static'.
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Reviewed-by: Michal Orzel <michal.orzel@amd.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 automation/build/README.md | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/automation/build/README.md b/automation/build/README.md
index 00305eed03..4cc1acb6b4 100644
--- a/automation/build/README.md
+++ b/automation/build/README.md
@@ -102,3 +102,16 @@ make -C automation/build suse/opensuse-tumbleweed PUSH=1
 
 [registry]: https://gitlab.com/xen-project/xen/container_registry
 [registry help]: https://gitlab.com/help/user/project/container_registry
+
+
+Building/Running container for a different architecture
+-------------------------------------------------------
+
+On an x86 host, it is possible to build and run containers for another
+architecture (such as a container made for Arm), with docker taking care of
+running the appropriate software to emulate that architecture. Simply install
+the package `qemu-user-static`; then you can start an Arm container on an x86
+host just as you would start an x86 container.
+
+If that doesn't work, you might find some information on
+[multiarch/qemu-user-static](https://github.com/multiarch/qemu-user-static).
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Sat Oct 01 15:34:33 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Oct 2022 15:34:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.414560.658890 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oeeVd-0001A5-PI; Sat, 01 Oct 2022 15:34:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 414560.658890; Sat, 01 Oct 2022 15:34:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oeeVd-00019x-MQ; Sat, 01 Oct 2022 15:34:33 +0000
Received: by outflank-mailman (input) for mailman id 414560;
 Sat, 01 Oct 2022 15:34:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oeeVd-00019o-B5
 for xen-changelog@lists.xenproject.org; Sat, 01 Oct 2022 15:34:33 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oeeVd-0008Le-9x
 for xen-changelog@lists.xenproject.org; Sat, 01 Oct 2022 15:34:33 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oeeVd-0003W2-9H
 for xen-changelog@lists.xenproject.org; Sat, 01 Oct 2022 15:34:33 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=hI//LkEys8GhA2K/+82bvf0X1tjtNxkeH1yC5gtoqxI=; b=A2y3Kiz/VUTs6klAcZ4Fd9phI6
	9lFEujyMUOm7eWfgso6FhaR8lVxh2YDHyvQYuLsDpviRQC7ITqvrdOHkPoJJnA8aq7jXpGeYkz3WT
	ougc9ee4v91JgfGB/YXfSLRo6AhH4xt7cM+lhToD0x+zUSb1lCCy0IW9NTlhNwHAOFl4=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] x86/vpmu: Fix race-condition in vpmu_load
Message-Id: <E1oeeVd-0003W2-9H@xenbits.xenproject.org>
Date: Sat, 01 Oct 2022 15:34:33 +0000

commit defa4e51d20a143bdd4395a075bf0933bb38a9a4
Author:     Tamas K Lengyel <tamas.lengyel@intel.com>
AuthorDate: Fri Sep 30 09:53:49 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Fri Sep 30 09:53:49 2022 +0200

    x86/vpmu: Fix race-condition in vpmu_load
    
    The vPMU code base attempts to optimize saving/reloading of the PMU
    context by keeping track of which vCPU ran on each pCPU. When a pCPU is
    being scheduled, the code checks whether the previously running vCPU is
    the current one; if not, it attempts a call to vpmu_save_force.
    Unfortunately, if the previous vCPU is already being scheduled to run on
    another pCPU, its state will already be runnable, which results in an
    ASSERT failure.
    
    Fix this by always performing a PMU context save in vpmu_save when called
    from vpmu_switch_from, and a vpmu_load when called from vpmu_switch_to.
    
    While this introduces minimal overhead when the same vCPU is rescheduled
    on the same pCPU, the ASSERT failure is avoided and the code is a lot
    easier to reason about.
    
    Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/x86/cpu/vpmu.c | 43 +++++--------------------------------------
 1 file changed, 5 insertions(+), 38 deletions(-)

diff --git a/xen/arch/x86/cpu/vpmu.c b/xen/arch/x86/cpu/vpmu.c
index cacc24a30f..64cdbfc48c 100644
--- a/xen/arch/x86/cpu/vpmu.c
+++ b/xen/arch/x86/cpu/vpmu.c
@@ -376,57 +376,24 @@ void vpmu_save(struct vcpu *v)
     vpmu->last_pcpu = pcpu;
     per_cpu(last_vcpu, pcpu) = v;
 
+    vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+
     if ( alternative_call(vpmu_ops.arch_vpmu_save, v, 0) )
         vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
 
+    vpmu_reset(vpmu, VPMU_CONTEXT_SAVE);
+
     apic_write(APIC_LVTPC, PMU_APIC_VECTOR | APIC_LVT_MASKED);
 }
 
 int vpmu_load(struct vcpu *v, bool_t from_guest)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    int pcpu = smp_processor_id(), ret;
-    struct vcpu *prev = NULL;
+    int ret;
 
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
         return 0;
 
-    /* First time this VCPU is running here */
-    if ( vpmu->last_pcpu != pcpu )
-    {
-        /*
-         * Get the context from last pcpu that we ran on. Note that if another
-         * VCPU is running there it must have saved this VPCU's context before
-         * startig to run (see below).
-         * There should be no race since remote pcpu will disable interrupts
-         * before saving the context.
-         */
-        if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-        {
-            on_selected_cpus(cpumask_of(vpmu->last_pcpu),
-                             vpmu_save_force, (void *)v, 1);
-            vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
-        }
-    } 
-
-    /* Prevent forced context save from remote CPU */
-    local_irq_disable();
-
-    prev = per_cpu(last_vcpu, pcpu);
-
-    if ( prev != v && prev )
-    {
-        vpmu = vcpu_vpmu(prev);
-
-        /* Someone ran here before us */
-        vpmu_save_force(prev);
-        vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
-
-        vpmu = vcpu_vpmu(v);
-    }
-
-    local_irq_enable();
-
     /* Only when PMU is counting, we load PMU context immediately. */
     if ( !vpmu_is_set(vpmu, VPMU_RUNNING) ||
          (!has_vlapic(vpmu_vcpu(vpmu)->domain) &&
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Sat Oct 01 15:34:43 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Oct 2022 15:34:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.414561.658893 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oeeVn-0001DK-Qf; Sat, 01 Oct 2022 15:34:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 414561.658893; Sat, 01 Oct 2022 15:34:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oeeVn-0001DC-O0; Sat, 01 Oct 2022 15:34:43 +0000
Received: by outflank-mailman (input) for mailman id 414561;
 Sat, 01 Oct 2022 15:34:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oeeVn-0001Cw-Dm
 for xen-changelog@lists.xenproject.org; Sat, 01 Oct 2022 15:34:43 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oeeVn-0008Lv-D2
 for xen-changelog@lists.xenproject.org; Sat, 01 Oct 2022 15:34:43 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oeeVn-0003X2-C9
 for xen-changelog@lists.xenproject.org; Sat, 01 Oct 2022 15:34:43 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=A0p65jOJM77yuZjMSFHLVD82zJHY2t6jvySwIFwlcgc=; b=ZlB9uCy019qVBA8FTxRFphBcUm
	VXGpO+vgA68rDOi52O5RYVATKXTSqptODYqVYY9UQxSD/P2EzpHFyqLBGx7mxz+pvb+TMlRnj4vY9
	VCC5iI+zQ0tR84T9hkKyc8Lk5D07WoI6Gql4oTygHtyfng8ie/4E9HLK3vcLIws2dpgU=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] x86/NUMA: correct off-by-1 in node map size calculation
Message-Id: <E1oeeVn-0003X2-C9@xenbits.xenproject.org>
Date: Sat, 01 Oct 2022 15:34:43 +0000

commit b1f4b45d02cac2bf704c2fcc61c70c3567cfaa5b
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Fri Sep 30 09:55:34 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Fri Sep 30 09:55:34 2022 +0200

    x86/NUMA: correct off-by-1 in node map size calculation
    
    extract_lsb_from_nodes() accumulates "memtop" from all PDXes one past
    the covered ranges. Hence the maximum address which can validly be used
    to index the node map is one below this value, and we may currently set
    up a node map with an unused (and never initialized) trailing entry. In
    boundary cases this may also mean we dynamically allocate a page when
    the static (64-entry) map would suffice.
    
    While there also correct the comment ahead of the function, for it to
    match the actual code: Linux commit 54413927f022 ("x86-64:
    x86_64-make-the-numa-hash-function-nodemap-allocation fix fix") removed
    the ORing in of the end address before we actually cloned their code.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Wei Chen <Wei.Chen@arm.com>
---
 xen/arch/x86/numa.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/numa.c b/xen/arch/x86/numa.c
index 1bc82c60aa..4f742414b0 100644
--- a/xen/arch/x86/numa.c
+++ b/xen/arch/x86/numa.c
@@ -111,7 +111,7 @@ static int __init allocate_cachealigned_memnodemap(void)
 }
 
 /*
- * The LSB of all start and end addresses in the node map is the value of the
+ * The LSB of all start addresses in the node map is the value of the
  * maximum possible shift.
  */
 static int __init extract_lsb_from_nodes(const struct node *nodes,
@@ -137,7 +137,7 @@ static int __init extract_lsb_from_nodes(const struct node *nodes,
         i = BITS_PER_LONG - 1;
     else
         i = find_first_bit(&bitfield, sizeof(unsigned long)*8);
-    memnodemapsize = (memtop >> i) + 1;
+    memnodemapsize = ((memtop - 1) >> i) + 1;
     return i;
 }
 
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Sat Oct 01 15:34:54 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Oct 2022 15:34:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.414562.658897 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oeeVy-0001Gi-SF; Sat, 01 Oct 2022 15:34:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 414562.658897; Sat, 01 Oct 2022 15:34:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oeeVy-0001Ga-PY; Sat, 01 Oct 2022 15:34:54 +0000
Received: by outflank-mailman (input) for mailman id 414562;
 Sat, 01 Oct 2022 15:34:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oeeVx-0001GP-Gi
 for xen-changelog@lists.xenproject.org; Sat, 01 Oct 2022 15:34:53 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oeeVx-0008M2-G2
 for xen-changelog@lists.xenproject.org; Sat, 01 Oct 2022 15:34:53 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oeeVx-0003Xr-FI
 for xen-changelog@lists.xenproject.org; Sat, 01 Oct 2022 15:34:53 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=c0xgC+xenz7gXcXZVMYs3HA871+/P47l0EVMkHXYgOI=; b=qZCDKTEq/Ywb/RFmfnFbHjVE5I
	GTMvW4y5kEooCZ8H3mEuKcD65AFxWtjo49EguJrakZH+vcICNS5cMpxKRHd/DP0e3sQPgKICQ8Wbe
	SF0KnWY4v4LiN6JSo8+5PVY0gdB681r5lSl6Su3Ob38Dx05edYcKIoh6kanRhb7vcgmg=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] Arm/vGIC: adjust gicv3_its_deny_access() to fit other gic*_iomem_deny_access(
Message-Id: <E1oeeVx-0003Xr-FI@xenbits.xenproject.org>
Date: Sat, 01 Oct 2022 15:34:53 +0000

commit 38e1276db4c5457cd6e7811b4e168aa85c8a0b06
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Fri Sep 30 09:56:27 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Fri Sep 30 09:56:27 2022 +0200

    Arm/vGIC: adjust gicv3_its_deny_access() to fit other gic*_iomem_deny_access(
    
    While the issue stems from an oversight in 9982fe275ba4 ("arm/vgic:
    drop const attribute from gic_iomem_deny_access()"), it really became
    apparent only when iomem_deny_access() was switched to have a
    non-const first parameter.
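    In isolation the type clash looks like the sketch below (all names
    and the struct layout are placeholders, not Xen's real definitions):
    once the callee takes a mutable pointer, a const-qualified wrapper
    parameter could no longer be forwarded without discarding the
    qualifier.

```c
#include <assert.h>

struct domain { int deny_count; };   /* placeholder, not Xen's struct */

/* Mirrors iomem_deny_access() after it gained a non-const first
 * parameter: the callee may legitimately modify the domain. */
static int iomem_deny_access(struct domain *d)
{
    d->deny_count++;
    return 0;
}

/* The wrapper must drop const as well; keeping "const struct domain *d"
 * here would discard the qualifier at the call below and fail to build
 * with warnings treated as errors. */
int gicv3_its_deny_access(struct domain *d)
{
    return iomem_deny_access(d);
}
```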
    
    Fixes: c4e5cc2ccc5b ("x86/ept: limit calls to memory_type_changed()")
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Michal Orzel <michal.orzel@amd.com>
    Tested-by: Michal Orzel <michal.orzel@amd.com>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 xen/arch/arm/gic-v3-its.c             | 2 +-
 xen/arch/arm/include/asm/gic_v3_its.h | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/gic-v3-its.c b/xen/arch/arm/gic-v3-its.c
index 9558bad96a..e217c21bf8 100644
--- a/xen/arch/arm/gic-v3-its.c
+++ b/xen/arch/arm/gic-v3-its.c
@@ -892,7 +892,7 @@ struct pending_irq *gicv3_assign_guest_event(struct domain *d,
     return pirq;
 }
 
-int gicv3_its_deny_access(const struct domain *d)
+int gicv3_its_deny_access(struct domain *d)
 {
     int rc = 0;
     unsigned long mfn, nr;
diff --git a/xen/arch/arm/include/asm/gic_v3_its.h b/xen/arch/arm/include/asm/gic_v3_its.h
index 168617097f..fae3f6ecef 100644
--- a/xen/arch/arm/include/asm/gic_v3_its.h
+++ b/xen/arch/arm/include/asm/gic_v3_its.h
@@ -139,7 +139,7 @@ unsigned long gicv3_its_make_hwdom_madt(const struct domain *d,
 #endif
 
 /* Deny iomem access for its */
-int gicv3_its_deny_access(const struct domain *d);
+int gicv3_its_deny_access(struct domain *d);
 
 bool gicv3_its_host_has_its(void);
 
@@ -206,7 +206,7 @@ static inline unsigned long gicv3_its_make_hwdom_madt(const struct domain *d,
 }
 #endif
 
-static inline int gicv3_its_deny_access(const struct domain *d)
+static inline int gicv3_its_deny_access(struct domain *d)
 {
     return 0;
 }
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Sat Oct 01 15:35:04 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Oct 2022 15:35:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.414563.658902 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oeeW8-0001Ke-Tm; Sat, 01 Oct 2022 15:35:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 414563.658902; Sat, 01 Oct 2022 15:35:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oeeW8-0001KW-R7; Sat, 01 Oct 2022 15:35:04 +0000
Received: by outflank-mailman (input) for mailman id 414563;
 Sat, 01 Oct 2022 15:35:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oeeW7-0001KP-Jl
 for xen-changelog@lists.xenproject.org; Sat, 01 Oct 2022 15:35:03 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oeeW7-0008MP-J1
 for xen-changelog@lists.xenproject.org; Sat, 01 Oct 2022 15:35:03 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oeeW7-0003Yr-IH
 for xen-changelog@lists.xenproject.org; Sat, 01 Oct 2022 15:35:03 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=sUfmCSm2hQSB98r6LKf+XwwhFNB6/E3Ib2W1WYipBhg=; b=5KR7hV/e3Jd/8EzakiRMFnZLGk
	IAtZbA90WNOCeIwCp/FCC0bDLBnpNisqZPgXY2zx/+L4LBxbNBEVReGQyO3+ROXU7qqZ460ixKnmB
	/wzw+ureBcqpIuwqRMHnFz256+bT57XgJfVgi0gTLZ4yNWp+63Xv+ulSE06GgGTGT/4M=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] x86/NUMA: improve memnode_shift calculation for multi node system
Message-Id: <E1oeeW7-0003Yr-IH@xenbits.xenproject.org>
Date: Sat, 01 Oct 2022 15:35:03 +0000

commit 1666086b00442b23e4fd70f4971e3bcf1a16b124
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Fri Sep 30 15:16:22 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Fri Sep 30 15:16:22 2022 +0200

    x86/NUMA: improve memnode_shift calculation for multi node system
    
    SRAT may describe individual nodes using multiple ranges. When they're
    adjacent (with or without a gap in between), only the start of the first
    such range actually needs accounting for. Furthermore the very first
    range doesn't need considering of its start address at all, as it's fine
    to associate all lower addresses (with no memory) with that same node.
    For this to work, the array of ranges needs to be sorted by address -
    adjust logic accordingly in acpi_numa_memory_affinity_init().
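    A simplified model of the accumulation (hypothetical function,
    ignoring end addresses and the PDX conversion of the real code) shows
    why sorting matters: with ranges ordered by address, only the start
    of the first range belonging to a new node is OR-ed in, and the very
    first start is skipped altogether.

```c
#include <assert.h>

/* starts[] must be sorted by address; node_of[i] is the node owning
 * range i.  Returns the maximum usable shift. */
static unsigned int extract_shift(const unsigned long *starts,
                                  const unsigned int *node_of,
                                  unsigned int n)
{
    unsigned long bitfield = 0;
    unsigned int i;

    for ( i = 0; i < n; ++i )
        /* Skip range 0 and further ranges of the same node. */
        if ( i && node_of[i - 1] != node_of[i] )
            bitfield |= starts[i];

    /* No constraining start at all: cap, like the real code, at
     * BITS_PER_LONG - 1. */
    return bitfield ? (unsigned int)__builtin_ctzl(bitfield)
                    : sizeof(unsigned long) * 8 - 1;
}
```

    For two adjacent node-0 ranges starting at 0x0 and 0x4000 followed by
    a node-1 range at 0x8000, only 0x8000 contributes, allowing shift 15;
    OR-ing every start, as before, would have capped it at 14.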
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/numa.c |  3 ++-
 xen/arch/x86/srat.c | 32 ++++++++++++++++++++++++++++----
 2 files changed, 30 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/numa.c b/xen/arch/x86/numa.c
index 4f742414b0..2c3c1c15fe 100644
--- a/xen/arch/x86/numa.c
+++ b/xen/arch/x86/numa.c
@@ -127,7 +127,8 @@ static int __init extract_lsb_from_nodes(const struct node *nodes,
         epdx = paddr_to_pdx(nodes[i].end - 1) + 1;
         if ( spdx >= epdx )
             continue;
-        bitfield |= spdx;
+        if ( i && (!nodeids || nodeids[i - 1] != nodeids[i]) )
+            bitfield |= spdx;
         if ( !i || !nodeids || nodeids[i - 1] != nodeids[i] )
             nodes_used++;
         if ( epdx > memtop )
diff --git a/xen/arch/x86/srat.c b/xen/arch/x86/srat.c
index b62a152911..fbcd8749c4 100644
--- a/xen/arch/x86/srat.c
+++ b/xen/arch/x86/srat.c
@@ -312,6 +312,7 @@ acpi_numa_memory_affinity_init(const struct acpi_srat_mem_affinity *ma)
 	unsigned pxm;
 	nodeid_t node;
 	unsigned int i;
+	bool next = false;
 
 	if (srat_disabled())
 		return;
@@ -413,14 +414,37 @@ acpi_numa_memory_affinity_init(const struct acpi_srat_mem_affinity *ma)
 	       node, pxm, start, end - 1,
 	       ma->flags & ACPI_SRAT_MEM_HOT_PLUGGABLE ? " (hotplug)" : "");
 
-	node_memblk_range[num_node_memblks].start = start;
-	node_memblk_range[num_node_memblks].end = end;
-	memblk_nodeid[num_node_memblks] = node;
+	/* Keep node_memblk_range[] sorted by address. */
+	for (i = 0; i < num_node_memblks; ++i)
+		if (node_memblk_range[i].start > start ||
+		    (node_memblk_range[i].start == start &&
+		     node_memblk_range[i].end > end))
+			break;
+
+	memmove(&node_memblk_range[i + 1], &node_memblk_range[i],
+	        (num_node_memblks - i) * sizeof(*node_memblk_range));
+	node_memblk_range[i].start = start;
+	node_memblk_range[i].end = end;
+
+	memmove(&memblk_nodeid[i + 1], &memblk_nodeid[i],
+	        (num_node_memblks - i) * sizeof(*memblk_nodeid));
+	memblk_nodeid[i] = node;
+
 	if (ma->flags & ACPI_SRAT_MEM_HOT_PLUGGABLE) {
-		__set_bit(num_node_memblks, memblk_hotplug);
+		next = true;
 		if (end > mem_hotplug)
 			mem_hotplug = end;
 	}
+	for (; i <= num_node_memblks; ++i) {
+		bool prev = next;
+
+		next = test_bit(i, memblk_hotplug);
+		if (prev)
+			__set_bit(i, memblk_hotplug);
+		else
+			__clear_bit(i, memblk_hotplug);
+	}
+
 	num_node_memblks++;
 }
 
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Tue Oct 04 01:00:11 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Oct 2022 01:00:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.415121.659610 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ofWI2-0001Gf-7c; Tue, 04 Oct 2022 01:00:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 415121.659610; Tue, 04 Oct 2022 01:00:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ofWI2-0001GU-4J; Tue, 04 Oct 2022 01:00:06 +0000
Received: by outflank-mailman (input) for mailman id 415121;
 Tue, 04 Oct 2022 01:00:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ofWI0-0000VP-G2
 for xen-changelog@lists.xenproject.org; Tue, 04 Oct 2022 01:00:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ofWI0-0003jC-Dc
 for xen-changelog@lists.xenproject.org; Tue, 04 Oct 2022 01:00:04 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ofWI0-0005i5-Bu
 for xen-changelog@lists.xenproject.org; Tue, 04 Oct 2022 01:00:04 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=bU1dT5IfIkHzIqZRLrsVR4MaQgNM0qlYCCu1EEH573A=; b=yRsMABMe3pa4dpyKxWYGXla36Q
	TMAgbjo3R7eBTJS3O+bMt7/9nqYGxBLGNKhge6H+B+qpp6Aqsb/C2VTlwmS5bsmUhdZPNNYLk5e3f
	VUtTMyayD70QXh+kHHoi5yDUfqd9Rgu6bM1UjgSnmGiZdxL7yNK59J8OrDOX8C4K8olQ=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] xen: Add static event channel in SUPPORT.md on ARM
Message-Id: <E1ofWI0-0005i5-Bu@xenbits.xenproject.org>
Date: Tue, 04 Oct 2022 01:00:04 +0000

commit efc220bcbd282dc01db05aa673bd9ed2b42f6632
Author:     Rahul Singh <rahul.singh@arm.com>
AuthorDate: Fri Sep 23 12:02:17 2022 +0100
Commit:     Stefano Stabellini <stefano.stabellini@amd.com>
CommitDate: Mon Oct 3 17:54:36 2022 -0700

    xen: Add static event channel in SUPPORT.md on ARM
    
    Static event channel support is tech preview, which shall be documented
    in SUPPORT.md
    
    Signed-off-by: Rahul Singh <rahul.singh@arm.com>
    Reviewed-by: Ayan Kumar Halder <ayankuma@amd.com>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 SUPPORT.md | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/SUPPORT.md b/SUPPORT.md
index 8ebd63ad82..29f74ac506 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -922,6 +922,13 @@ bootscrub=off are passed as Xen command line parameters. (Memory should
 be scrubbed with bootscrub=idle.) No XSAs will be issues due to
 unscrubbed memory.
 
+## Static Event Channel
+
+Allow to setup the static event channel on dom0less system, enabling domains
+to send/receive notifications.
+
+    Status, ARM: Tech Preview
+
 # Format and definitions
 
 This file contains prose, and machine-readable fragments.
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Tue Oct 04 01:00:16 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Oct 2022 01:00:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.415122.659614 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ofWIC-0004qR-95; Tue, 04 Oct 2022 01:00:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 415122.659614; Tue, 04 Oct 2022 01:00:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ofWIC-0004pc-6H; Tue, 04 Oct 2022 01:00:16 +0000
Received: by outflank-mailman (input) for mailman id 415122;
 Tue, 04 Oct 2022 01:00:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ofWIA-0004Kh-HD
 for xen-changelog@lists.xenproject.org; Tue, 04 Oct 2022 01:00:14 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ofWIA-00083F-GV
 for xen-changelog@lists.xenproject.org; Tue, 04 Oct 2022 01:00:14 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ofWIA-0005jH-Fm
 for xen-changelog@lists.xenproject.org; Tue, 04 Oct 2022 01:00:14 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=DBlxJ+7ame3xn379LHOQG8z5n8fuTAopBWe28Ip9/kI=; b=DmuOhZBvS0JclRWVff0ltu6yR2
	/IZWLW4QBYFGhlSBv85XAvzMv+XLGB+Ge2xS5UdlmL4ZQNY0FHYYSbL5cygKHk7+FEeHqLg+mwte3
	HWX8VDZPrV4BqPy4hGAFYl0grQrMUFd649FAvKevxtrUsA3H95OI6FtZauKPjYsw2BIc=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] xen/arm: fix booting ACPI based system after static evtchn series
Message-Id: <E1ofWIA-0005jH-Fm@xenbits.xenproject.org>
Date: Tue, 04 Oct 2022 01:00:14 +0000

commit 3161231abcb461314b205337fbd0530c7ead1696
Author:     Rahul Singh <rahul.singh@arm.com>
AuthorDate: Fri Sep 23 12:02:18 2022 +0100
Commit:     Stefano Stabellini <stefano.stabellini@amd.com>
CommitDate: Mon Oct 3 17:56:07 2022 -0700

    xen/arm: fix booting ACPI based system after static evtchn series
    
    When ACPI is enabled and the system is booted with ACPI, BUG() is
    observed after merging the static event channel series. As there is
    no DT when booted with ACPI, there will be no chosen node; because of
    that, "BUG_ON(chosen == NULL)" will be hit.
    
    (XEN) Xen BUG at arch/arm/domain_build.c:3578
    
    Move call to alloc_static_evtchn() under acpi_disabled check to fix the
    issue.
    
    Fixes: 1fe16b3ed78a (xen/arm: introduce xen-evtchn dom0less property)
    Signed-off-by: Rahul Singh <rahul.singh@arm.com>
    [stefano: minor spelling fix in commit message]
    Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
    Reviewed-by: Ayan Kumar Halder <ayankuma@amd.com>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 xen/arch/arm/setup.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 61b4f258a0..4395640019 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -1166,9 +1166,10 @@ void __init start_xen(unsigned long boot_phys_offset,
         printk(XENLOG_INFO "Xen dom0less mode detected\n");
 
     if ( acpi_disabled )
+    {
         create_domUs();
-
-    alloc_static_evtchn();
+        alloc_static_evtchn();
+    }
 
     /*
      * This needs to be called **before** heap_init_late() so modules
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Tue Oct 04 18:33:07 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Oct 2022 18:33:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.415777.660422 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ofmj0-00082s-Pz; Tue, 04 Oct 2022 18:33:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 415777.660422; Tue, 04 Oct 2022 18:33:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ofmj0-00082j-Mc; Tue, 04 Oct 2022 18:33:02 +0000
Received: by outflank-mailman (input) for mailman id 415777;
 Tue, 04 Oct 2022 18:33:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ofmiz-00082d-Jo
 for xen-changelog@lists.xenproject.org; Tue, 04 Oct 2022 18:33:01 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ofmiz-0000zN-J5
 for xen-changelog@lists.xenproject.org; Tue, 04 Oct 2022 18:33:01 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ofmiz-0001BS-GR
 for xen-changelog@lists.xenproject.org; Tue, 04 Oct 2022 18:33:01 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=Cl722TM3Ax3aoF4WaaheZuaAVL2ctYq7etM29n64EYg=; b=xzGLtO9i0Lk7Q5Y/7JYU2WPDiN
	S2pYu+ChafoizWFoDqEyGDhrTRb2J0xrnmjEnZJp/QqhYqmDXNcFcOXj6WrdzR1qagmKb9bofIfkS
	Ef1ILzYJ7fJyYZTINqbKwuhqLl0EOKy1poMGyD+9zNEYlvI9iz6/jj+fm3L6+VCPzwLY=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] xen: Add static event channel in SUPPORT.md on ARM
Message-Id: <E1ofmiz-0001BS-GR@xenbits.xenproject.org>
Date: Tue, 04 Oct 2022 18:33:01 +0000

commit efc220bcbd282dc01db05aa673bd9ed2b42f6632
Author:     Rahul Singh <rahul.singh@arm.com>
AuthorDate: Fri Sep 23 12:02:17 2022 +0100
Commit:     Stefano Stabellini <stefano.stabellini@amd.com>
CommitDate: Mon Oct 3 17:54:36 2022 -0700

    xen: Add static event channel in SUPPORT.md on ARM
    
    Static event channel support is tech preview, which shall be documented
    in SUPPORT.md
    
    Signed-off-by: Rahul Singh <rahul.singh@arm.com>
    Reviewed-by: Ayan Kumar Halder <ayankuma@amd.com>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 SUPPORT.md | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/SUPPORT.md b/SUPPORT.md
index 8ebd63ad82..29f74ac506 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -922,6 +922,13 @@ bootscrub=off are passed as Xen command line parameters. (Memory should
 be scrubbed with bootscrub=idle.) No XSAs will be issues due to
 unscrubbed memory.
 
+## Static Event Channel
+
+Allow to setup the static event channel on dom0less system, enabling domains
+to send/receive notifications.
+
+    Status, ARM: Tech Preview
+
 # Format and definitions
 
 This file contains prose, and machine-readable fragments.
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Tue Oct 04 18:33:12 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Oct 2022 18:33:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.415778.660424 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ofmjA-00084s-Qw; Tue, 04 Oct 2022 18:33:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 415778.660424; Tue, 04 Oct 2022 18:33:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ofmjA-00084k-O9; Tue, 04 Oct 2022 18:33:12 +0000
Received: by outflank-mailman (input) for mailman id 415778;
 Tue, 04 Oct 2022 18:33:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ofmj9-00084W-NG
 for xen-changelog@lists.xenproject.org; Tue, 04 Oct 2022 18:33:11 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ofmj9-0000zh-MS
 for xen-changelog@lists.xenproject.org; Tue, 04 Oct 2022 18:33:11 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ofmj9-0001Bv-LU
 for xen-changelog@lists.xenproject.org; Tue, 04 Oct 2022 18:33:11 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=sNqHg+O+MbrwtAQuDdTXy0G510Qy+czRQyYtF2Buh90=; b=w2vu3dKWOIELVSC3p5YJoQkGf+
	f7AxN0DbAatCTBZ8GVnN93d2Z6Q52CB8s4wy20DP+6k7w839xDid0mwFv4u9fv/4bgIR9PlFg5GmZ
	9tCGL4uInT0Era6LX1uVAlvWXmnTJq+MyZopuKbBphf+qLExaxWd8oHGvTkf4MNb7vdg=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] xen/arm: fix booting ACPI based system after static evtchn series
Message-Id: <E1ofmj9-0001Bv-LU@xenbits.xenproject.org>
Date: Tue, 04 Oct 2022 18:33:11 +0000

commit 3161231abcb461314b205337fbd0530c7ead1696
Author:     Rahul Singh <rahul.singh@arm.com>
AuthorDate: Fri Sep 23 12:02:18 2022 +0100
Commit:     Stefano Stabellini <stefano.stabellini@amd.com>
CommitDate: Mon Oct 3 17:56:07 2022 -0700

    xen/arm: fix booting ACPI based system after static evtchn series
    
    When ACPI is enabled and the system is booted with ACPI, BUG() is
    observed after merging the static event channel series. As there is
    no DT when booted with ACPI, there will be no chosen node; because of
    that, "BUG_ON(chosen == NULL)" will be hit.
    
    (XEN) Xen BUG at arch/arm/domain_build.c:3578
    
    Move call to alloc_static_evtchn() under acpi_disabled check to fix the
    issue.
    
    Fixes: 1fe16b3ed78a (xen/arm: introduce xen-evtchn dom0less property)
    Signed-off-by: Rahul Singh <rahul.singh@arm.com>
    [stefano: minor spelling fix in commit message]
    Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
    Reviewed-by: Ayan Kumar Halder <ayankuma@amd.com>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 xen/arch/arm/setup.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 61b4f258a0..4395640019 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -1166,9 +1166,10 @@ void __init start_xen(unsigned long boot_phys_offset,
         printk(XENLOG_INFO "Xen dom0less mode detected\n");
 
     if ( acpi_disabled )
+    {
         create_domUs();
-
-    alloc_static_evtchn();
+        alloc_static_evtchn();
+    }
 
     /*
      * This needs to be called **before** heap_init_late() so modules
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Wed Oct 05 09:00:10 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] x86/NUMA: correct off-by-1 in node map population
Message-Id: <E1og0G4-0006cf-KE@xenbits.xenproject.org>
Date: Wed, 05 Oct 2022 09:00:04 +0000

commit 66a5633aa038f4abb4455463755974febac69034
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Wed Oct 5 10:55:27 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Wed Oct 5 10:55:27 2022 +0200

    x86/NUMA: correct off-by-1 in node map population
    
    As it turns out populate_memnodemap() so far "relied" on
    extract_lsb_from_nodes() setting memnodemapsize one too high in edge
    cases. Correct the issue there as well, by changing "epdx" to be an
    inclusive PDX and adjusting the respective relational operators.
    
    While there also limit the scope of both related variables.
    
    Fixes: b1f4b45d02ca ("x86/NUMA: correct off-by-1 in node map size calculation")
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/arch/x86/numa.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/numa.c b/xen/arch/x86/numa.c
index 2c3c1c15fe..322157fab7 100644
--- a/xen/arch/x86/numa.c
+++ b/xen/arch/x86/numa.c
@@ -65,15 +65,15 @@ int srat_disabled(void)
 static int __init populate_memnodemap(const struct node *nodes,
                                       int numnodes, int shift, nodeid_t *nodeids)
 {
-    unsigned long spdx, epdx;
     int i, res = -1;
 
     memset(memnodemap, NUMA_NO_NODE, memnodemapsize * sizeof(*memnodemap));
     for ( i = 0; i < numnodes; i++ )
     {
-        spdx = paddr_to_pdx(nodes[i].start);
-        epdx = paddr_to_pdx(nodes[i].end - 1) + 1;
-        if ( spdx >= epdx )
+        unsigned long spdx = paddr_to_pdx(nodes[i].start);
+        unsigned long epdx = paddr_to_pdx(nodes[i].end - 1);
+
+        if ( spdx > epdx )
             continue;
         if ( (epdx >> shift) >= memnodemapsize )
             return 0;
@@ -88,7 +88,7 @@ static int __init populate_memnodemap(const struct node *nodes,
                 memnodemap[spdx >> shift] = nodeids[i];
 
             spdx += (1UL << shift);
-        } while ( spdx < epdx );
+        } while ( spdx <= epdx );
         res = 1;
     }
 
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Wed Oct 05 21:44:07 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] x86/NUMA: correct off-by-1 in node map population
Message-Id: <E1ogCBO-0005dQ-0e@xenbits.xenproject.org>
Date: Wed, 05 Oct 2022 21:44:02 +0000

commit 66a5633aa038f4abb4455463755974febac69034
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Wed Oct 5 10:55:27 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Wed Oct 5 10:55:27 2022 +0200

    x86/NUMA: correct off-by-1 in node map population
    
    As it turns out populate_memnodemap() so far "relied" on
    extract_lsb_from_nodes() setting memnodemapsize one too high in edge
    cases. Correct the issue there as well, by changing "epdx" to be an
    inclusive PDX and adjusting the respective relational operators.
    
    While there also limit the scope of both related variables.
    
    Fixes: b1f4b45d02ca ("x86/NUMA: correct off-by-1 in node map size calculation")
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/arch/x86/numa.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/numa.c b/xen/arch/x86/numa.c
index 2c3c1c15fe..322157fab7 100644
--- a/xen/arch/x86/numa.c
+++ b/xen/arch/x86/numa.c
@@ -65,15 +65,15 @@ int srat_disabled(void)
 static int __init populate_memnodemap(const struct node *nodes,
                                       int numnodes, int shift, nodeid_t *nodeids)
 {
-    unsigned long spdx, epdx;
     int i, res = -1;
 
     memset(memnodemap, NUMA_NO_NODE, memnodemapsize * sizeof(*memnodemap));
     for ( i = 0; i < numnodes; i++ )
     {
-        spdx = paddr_to_pdx(nodes[i].start);
-        epdx = paddr_to_pdx(nodes[i].end - 1) + 1;
-        if ( spdx >= epdx )
+        unsigned long spdx = paddr_to_pdx(nodes[i].start);
+        unsigned long epdx = paddr_to_pdx(nodes[i].end - 1);
+
+        if ( spdx > epdx )
             continue;
         if ( (epdx >> shift) >= memnodemapsize )
             return 0;
@@ -88,7 +88,7 @@ static int __init populate_memnodemap(const struct node *nodes,
                 memnodemap[spdx >> shift] = nodeids[i];
 
             spdx += (1UL << shift);
-        } while ( spdx < epdx );
+        } while ( spdx <= epdx );
         res = 1;
     }
 
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Fri Oct 07 13:33:10 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] Config.mk pin QEMU_UPSTREAM_REVISION (prep for Xen 4.17 RC1)
Message-Id: <E1ognTM-0004uy-IV@xenbits.xenproject.org>
Date: Fri, 07 Oct 2022 13:33:04 +0000

commit b4ddd34d3a199167d48a50c72729be397c50f8cd
Author:     Julien Grall <jgrall@amazon.com>
AuthorDate: Fri Oct 7 10:13:40 2022 +0100
Commit:     Julien Grall <jgrall@amazon.com>
CommitDate: Fri Oct 7 14:30:01 2022 +0100

    Config.mk pin QEMU_UPSTREAM_REVISION (prep for Xen 4.17 RC1)
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 Config.mk | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Config.mk b/Config.mk
index 69af1e60d4..e0ce593468 100644
--- a/Config.mk
+++ b/Config.mk
@@ -229,7 +229,7 @@ SEABIOS_UPSTREAM_URL ?= git://xenbits.xen.org/seabios.git
 MINIOS_UPSTREAM_URL ?= git://xenbits.xen.org/mini-os.git
 endif
 OVMF_UPSTREAM_REVISION ?= 7b4a99be8a39c12d3a7fc4b8db9f0eab4ac688d5
-QEMU_UPSTREAM_REVISION ?= master
+QEMU_UPSTREAM_REVISION ?= b746458e1ce1bec85e58b458386f8b7a0bedfaa6
 MINIOS_UPSTREAM_REVISION ?= 5bcb28aaeba1c2506a82fab0cdad0201cd9b54b3
 
 SEABIOS_UPSTREAM_REVISION ?= rel-1.16.0
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Fri Oct 07 13:33:16 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] Update Xen version to 4.17-rc
Message-Id: <E1ognTW-0004vR-Lp@xenbits.xenproject.org>
Date: Fri, 07 Oct 2022 13:33:14 +0000

commit 9029bc265cdf2bd63376dde9fdd91db4ce9c0586
Author:     Julien Grall <jgrall@amazon.com>
AuthorDate: Fri Oct 7 10:13:41 2022 +0100
Commit:     Julien Grall <jgrall@amazon.com>
CommitDate: Fri Oct 7 14:30:01 2022 +0100

    Update Xen version to 4.17-rc
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 README       | 16 ++++++++--------
 SUPPORT.md   |  2 +-
 xen/Makefile |  2 +-
 3 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/README b/README
index 89a1d0b43c..2fdca8861b 100644
--- a/README
+++ b/README
@@ -1,11 +1,11 @@
-############################################################
-__  __                                _        _     _
-\ \/ /___ _ __        _   _ _ __  ___| |_ __ _| |__ | | ___
- \  // _ \ '_ \ _____| | | | '_ \/ __| __/ _` | '_ \| |/ _ \
- /  \  __/ | | |_____| |_| | | | \__ \ || (_| | |_) | |  __/
-/_/\_\___|_| |_|      \__,_|_| |_|___/\__\__,_|_.__/|_|\___|
-
-############################################################
+###############################################
+__  __            _  _    _ _____
+\ \/ /___ _ __   | || |  / |___  |    _ __ ___
+ \  // _ \ '_ \  | || |_ | |  / /____| '__/ __|
+ /  \  __/ | | | |__   _|| | / /_____| | | (__
+/_/\_\___|_| |_|    |_|(_)_|/_/      |_|  \___|
+
+###############################################
 
 https://www.xen.org/
 
diff --git a/SUPPORT.md b/SUPPORT.md
index 29f74ac506..cf2ddfacaf 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -9,7 +9,7 @@ for the definitions of the support status levels etc.
 
 # Release Support
 
-    Xen-Version: unstable
+    Xen-Version: 4.17-rc
     Initial-Release: n/a
     Supported-Until: TBD
     Security-Support-Until: Unreleased - not yet security-supported
diff --git a/xen/Makefile b/xen/Makefile
index 4e6e661261..9d0df5e2c5 100644
--- a/xen/Makefile
+++ b/xen/Makefile
@@ -6,7 +6,7 @@ this-makefile := $(call lastword,$(MAKEFILE_LIST))
 # All other places this is stored (eg. compile.h) should be autogenerated.
 export XEN_VERSION       = 4
 export XEN_SUBVERSION    = 17
-export XEN_EXTRAVERSION ?= -unstable$(XEN_VENDORVERSION)
+export XEN_EXTRAVERSION ?= -rc$(XEN_VENDORVERSION)
 export XEN_FULLVERSION   = $(XEN_VERSION).$(XEN_SUBVERSION)$(XEN_EXTRAVERSION)
 -include xen-version
 
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Sat Oct 08 01:44:11 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] Config.mk pin QEMU_UPSTREAM_REVISION (prep for Xen 4.17 RC1)
Message-Id: <E1ogysk-0000yV-9t@xenbits.xenproject.org>
Date: Sat, 08 Oct 2022 01:44:02 +0000

commit b4ddd34d3a199167d48a50c72729be397c50f8cd
Author:     Julien Grall <jgrall@amazon.com>
AuthorDate: Fri Oct 7 10:13:40 2022 +0100
Commit:     Julien Grall <jgrall@amazon.com>
CommitDate: Fri Oct 7 14:30:01 2022 +0100

    Config.mk pin QEMU_UPSTREAM_REVISION (prep for Xen 4.17 RC1)
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 Config.mk | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Config.mk b/Config.mk
index 69af1e60d4..e0ce593468 100644
--- a/Config.mk
+++ b/Config.mk
@@ -229,7 +229,7 @@ SEABIOS_UPSTREAM_URL ?= git://xenbits.xen.org/seabios.git
 MINIOS_UPSTREAM_URL ?= git://xenbits.xen.org/mini-os.git
 endif
 OVMF_UPSTREAM_REVISION ?= 7b4a99be8a39c12d3a7fc4b8db9f0eab4ac688d5
-QEMU_UPSTREAM_REVISION ?= master
+QEMU_UPSTREAM_REVISION ?= b746458e1ce1bec85e58b458386f8b7a0bedfaa6
 MINIOS_UPSTREAM_REVISION ?= 5bcb28aaeba1c2506a82fab0cdad0201cd9b54b3
 
 SEABIOS_UPSTREAM_REVISION ?= rel-1.16.0
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Sat Oct 08 01:44:12 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] Update Xen version to 4.17-rc
Message-Id: <E1ogysu-0000z5-DA@xenbits.xenproject.org>
Date: Sat, 08 Oct 2022 01:44:12 +0000

commit 9029bc265cdf2bd63376dde9fdd91db4ce9c0586
Author:     Julien Grall <jgrall@amazon.com>
AuthorDate: Fri Oct 7 10:13:41 2022 +0100
Commit:     Julien Grall <jgrall@amazon.com>
CommitDate: Fri Oct 7 14:30:01 2022 +0100

    Update Xen version to 4.17-rc
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 README       | 16 ++++++++--------
 SUPPORT.md   |  2 +-
 xen/Makefile |  2 +-
 3 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/README b/README
index 89a1d0b43c..2fdca8861b 100644
--- a/README
+++ b/README
@@ -1,11 +1,11 @@
-############################################################
-__  __                                _        _     _
-\ \/ /___ _ __        _   _ _ __  ___| |_ __ _| |__ | | ___
- \  // _ \ '_ \ _____| | | | '_ \/ __| __/ _` | '_ \| |/ _ \
- /  \  __/ | | |_____| |_| | | | \__ \ || (_| | |_) | |  __/
-/_/\_\___|_| |_|      \__,_|_| |_|___/\__\__,_|_.__/|_|\___|
-
-############################################################
+###############################################
+__  __            _  _    _ _____
+\ \/ /___ _ __   | || |  / |___  |    _ __ ___
+ \  // _ \ '_ \  | || |_ | |  / /____| '__/ __|
+ /  \  __/ | | | |__   _|| | / /_____| | | (__
+/_/\_\___|_| |_|    |_|(_)_|/_/      |_|  \___|
+
+###############################################
 
 https://www.xen.org/
 
diff --git a/SUPPORT.md b/SUPPORT.md
index 29f74ac506..cf2ddfacaf 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -9,7 +9,7 @@ for the definitions of the support status levels etc.
 
 # Release Support
 
-    Xen-Version: unstable
+    Xen-Version: 4.17-rc
     Initial-Release: n/a
     Supported-Until: TBD
     Security-Support-Until: Unreleased - not yet security-supported
diff --git a/xen/Makefile b/xen/Makefile
index 4e6e661261..9d0df5e2c5 100644
--- a/xen/Makefile
+++ b/xen/Makefile
@@ -6,7 +6,7 @@ this-makefile := $(call lastword,$(MAKEFILE_LIST))
 # All other places this is stored (eg. compile.h) should be autogenerated.
 export XEN_VERSION       = 4
 export XEN_SUBVERSION    = 17
-export XEN_EXTRAVERSION ?= -unstable$(XEN_VENDORVERSION)
+export XEN_EXTRAVERSION ?= -rc$(XEN_VENDORVERSION)
 export XEN_FULLVERSION   = $(XEN_VERSION).$(XEN_SUBVERSION)$(XEN_EXTRAVERSION)
 -include xen-version
 
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 12:33:07 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] xen/arm: p2m: Prevent adding mapping when domain is dying
Message-Id: <E1oiERU-0006yt-AB@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 12:33:04 +0000

commit 3ebe773293e3b945460a3d6f54f3b91915397bab
Author:     Julien Grall <jgrall@amazon.com>
AuthorDate: Mon Jun 6 06:17:25 2022 +0000
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:20:18 2022 +0200

    xen/arm: p2m: Prevent adding mapping when domain is dying
    
    During the domain destroy process, the domain will still be accessible
    until it is fully destroyed. So does the P2M because we don't bail
    out early if is_dying is non-zero. If a domain has permission to
    modify the other domain's P2M (i.e. dom0, or a stubdomain), then
    foreign mapping can be added past relinquish_p2m_mapping().
    
    Therefore, we need to prevent mapping to be added when the domain
    is dying. This commit prevents such adding of mapping by adding the
    d->is_dying check to p2m_set_entry(). Also this commit enhances the
    check in relinquish_p2m_mapping() to make sure that no mappings can
    be added in the P2M after the P2M lock is released.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Tested-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
 xen/arch/arm/p2m.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 8449f97fe7..c2e0b116c4 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1092,6 +1092,15 @@ int p2m_set_entry(struct p2m_domain *p2m,
 {
     int rc = 0;
 
+    /*
+     * Any reference taken by the P2M mappings (e.g. foreign mapping) will
+     * be dropped in relinquish_p2m_mapping(). As the P2M will still
+     * be accessible after, we need to prevent mapping to be added when the
+     * domain is dying.
+     */
+    if ( unlikely(p2m->domain->is_dying) )
+        return -ENOMEM;
+
     while ( nr )
     {
         unsigned long mask;
@@ -1634,6 +1643,8 @@ int relinquish_p2m_mapping(struct domain *d)
     unsigned int order;
     gfn_t start, end;
 
+    BUG_ON(!d->is_dying);
+    /* No mappings can be added in the P2M after the P2M lock is released. */
     p2m_write_lock(p2m);
 
     start = p2m->lowest_mapped_gfn;
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 12:33:16 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 12:33:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420160.664726 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiERg-0000CZ-NI; Tue, 11 Oct 2022 12:33:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420160.664726; Tue, 11 Oct 2022 12:33:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiERg-0000CP-KR; Tue, 11 Oct 2022 12:33:16 +0000
Received: by outflank-mailman (input) for mailman id 420160;
 Tue, 11 Oct 2022 12:33:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiERe-0000CF-G1
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:33:14 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiERe-0001NZ-FB
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:33:14 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiERe-0006zX-ED
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:33:14 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=4F5DeiTvptCQWFANaF/uzwuVPhIJGm866kD61T3flZQ=; b=fVudPLVK0d1KXh2xs/JvbYn6O9
	0TXmwdmXaT/e5q6lxoFcaR7V5mdOVZcXQK9eMLVFv49oFk/d4D3+jUMjAot8CqyW16fUAyW5HfJIe
	Us/Hh3+3XcYy0QJzN4Qp8oZO684TD3SSqIbG3bUpZtLfDuQ8J+DYkB4TwbT05W0dgDmU=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] xen/arm: p2m: Handle preemption when freeing intermediate page tables
Message-Id: <E1oiERe-0006zX-ED@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 12:33:14 +0000

commit 3202084566bba0ef0c45caf8c24302f83d92f9c8
Author:     Julien Grall <jgrall@amazon.com>
AuthorDate: Mon Jun 6 06:17:26 2022 +0000
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:20:56 2022 +0200

    xen/arm: p2m: Handle preemption when freeing intermediate page tables
    
    At the moment the P2M page tables are freed, without any preemption,
    when the domain structure is freed. As the P2M is quite large,
    iterating through it may take more time than is reasonable without
    intermediate preemption (to run softirqs and perhaps the scheduler).
    
    Split p2m_teardown() into two parts: one preemptible, called when
    relinquishing the resources, and one non-preemptible, called when
    freeing the domain structure.
    
    As we are now freeing the P2M pages early, we also need to prevent
    further allocation if someone calls p2m_set_entry() past
    p2m_teardown() (I wasn't able to prove this will never happen). This
    is done by the domain->is_dying check added to p2m_set_entry() in
    the previous patch.
    
    Similarly, we want to make sure that no one can access the freed
    pages. Therefore the root is cleared before the pages are freed.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Tested-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
 xen/arch/arm/domain.c          | 10 +++++++--
 xen/arch/arm/include/asm/p2m.h | 13 ++++++++++--
 xen/arch/arm/p2m.c             | 47 +++++++++++++++++++++++++++++++++++++++---
 3 files changed, 63 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 2d6253181a..746ad3438a 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -795,10 +795,10 @@ fail:
 void arch_domain_destroy(struct domain *d)
 {
     /* IOMMU page table is shared with P2M, always call
-     * iommu_domain_destroy() before p2m_teardown().
+     * iommu_domain_destroy() before p2m_final_teardown().
      */
     iommu_domain_destroy(d);
-    p2m_teardown(d);
+    p2m_final_teardown(d);
     domain_vgic_free(d);
     domain_vuart_free(d);
     free_xenheap_page(d->shared_info);
@@ -1001,6 +1001,7 @@ enum {
     PROG_xen,
     PROG_page,
     PROG_mapping,
+    PROG_p2m,
     PROG_done,
 };
 
@@ -1061,6 +1062,11 @@ int domain_relinquish_resources(struct domain *d)
         if ( ret )
             return ret;
 
+    PROGRESS(p2m):
+        ret = p2m_teardown(d);
+        if ( ret )
+            return ret;
+
     PROGRESS(done):
         break;
 
diff --git a/xen/arch/arm/include/asm/p2m.h b/xen/arch/arm/include/asm/p2m.h
index 8cce459b67..a15ea67f9b 100644
--- a/xen/arch/arm/include/asm/p2m.h
+++ b/xen/arch/arm/include/asm/p2m.h
@@ -192,8 +192,17 @@ void setup_virt_paging(void);
 /* Init the datastructures for later use by the p2m code */
 int p2m_init(struct domain *d);
 
-/* Return all the p2m resources to Xen. */
-void p2m_teardown(struct domain *d);
+/*
+ * The P2M resources are freed in two parts:
+ *  - p2m_teardown() will be called when relinquish the resources. It
+ *    will free large resources (e.g. intermediate page-tables) that
+ *    requires preemption.
+ *  - p2m_final_teardown() will be called when domain struct is been
+ *    freed. This *cannot* be preempted and therefore one small
+ *    resources should be freed here.
+ */
+int p2m_teardown(struct domain *d);
+void p2m_final_teardown(struct domain *d);
 
 /*
  * Remove mapping refcount on each mapping page in the p2m
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index c2e0b116c4..b445f4d754 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1551,17 +1551,58 @@ static void p2m_free_vmid(struct domain *d)
     spin_unlock(&vmid_alloc_lock);
 }
 
-void p2m_teardown(struct domain *d)
+int p2m_teardown(struct domain *d)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    unsigned long count = 0;
     struct page_info *pg;
+    unsigned int i;
+    int rc = 0;
+
+    p2m_write_lock(p2m);
+
+    /*
+     * We are about to free the intermediate page-tables, so clear the
+     * root to prevent any walk to use them.
+     */
+    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
+        clear_and_clean_page(p2m->root + i);
+
+    /*
+     * The domain will not be scheduled anymore, so in theory we should
+     * not need to flush the TLBs. Do it for safety purpose.
+     *
+     * Note that all the devices have already been de-assigned. So we don't
+     * need to flush the IOMMU TLB here.
+     */
+    p2m_force_tlb_flush_sync(p2m);
+
+    while ( (pg = page_list_remove_head(&p2m->pages)) )
+    {
+        free_domheap_page(pg);
+        count++;
+        /* Arbitrarily preempt every 512 iterations */
+        if ( !(count % 512) && hypercall_preempt_check() )
+        {
+            rc = -ERESTART;
+            break;
+        }
+    }
+
+    p2m_write_unlock(p2m);
+
+    return rc;
+}
+
+void p2m_final_teardown(struct domain *d)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
     /* p2m not actually initialized */
     if ( !p2m->domain )
         return;
 
-    while ( (pg = page_list_remove_head(&p2m->pages)) )
-        free_domheap_page(pg);
+    ASSERT(page_list_empty(&p2m->pages));
 
     if ( p2m->root )
         free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 12:33:25 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 12:33:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420161.664730 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiERp-0000Fl-Os; Tue, 11 Oct 2022 12:33:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420161.664730; Tue, 11 Oct 2022 12:33:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiERp-0000Fa-Lv; Tue, 11 Oct 2022 12:33:25 +0000
Received: by outflank-mailman (input) for mailman id 420161;
 Tue, 11 Oct 2022 12:33:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiERo-0000FI-Jy
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:33:24 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiERo-0001Nk-JD
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:33:24 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiERo-000700-HN
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:33:24 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=4q4/C6c6j1XfnEQ24Yj2S5goRlW7PcMTUWw28d8GJg4=; b=sMDzaa9LEEQ7/sRuSqhcbOlqAo
	YBMvDm8n9vftdB6ZznfrreqCMSbAGUUNpn38mHFfnUxlWdPGnusjL5XIEgtrz7TzpMdkjWniwGLf8
	7JjRP8I3Uw6zX0rWooKB9wrQyqcCG724mVd63s3dVlB1evxbYkhVkDXPzO5rIdGlJtl0=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] x86/p2m: add option to skip root pagetable removal in p2m_teardown()
Message-Id: <E1oiERo-000700-HN@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 12:33:24 +0000

commit 1df52a270225527ae27bfa2fc40347bf93b78357
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 14:21:23 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:21:23 2022 +0200

    x86/p2m: add option to skip root pagetable removal in p2m_teardown()
    
    Add a new parameter to p2m_teardown() in order to select whether the
    root page table should also be freed.  Note that all users are
    adjusted to pass the parameter to remove the root page tables, so
    behavior is not modified.
    
    No functional change intended.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Suggested-by: Julien Grall <julien@xen.org>
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/include/asm/p2m.h  |  2 +-
 xen/arch/x86/mm/hap/hap.c       |  6 +++---
 xen/arch/x86/mm/p2m-basic.c     | 18 ++++++++++++++----
 xen/arch/x86/mm/shadow/common.c |  4 ++--
 4 files changed, 20 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/include/asm/p2m.h b/xen/arch/x86/include/asm/p2m.h
index 0a0f7114f3..bafbd96052 100644
--- a/xen/arch/x86/include/asm/p2m.h
+++ b/xen/arch/x86/include/asm/p2m.h
@@ -600,7 +600,7 @@ int p2m_init(struct domain *d);
 int p2m_alloc_table(struct p2m_domain *p2m);
 
 /* Return all the p2m resources to Xen. */
-void p2m_teardown(struct p2m_domain *p2m);
+void p2m_teardown(struct p2m_domain *p2m, bool remove_root);
 void p2m_final_teardown(struct domain *d);
 
 /* Add/remove a page to/from a domain's p2m table. */
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 79929774e8..9e0b725c59 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -541,18 +541,18 @@ void hap_final_teardown(struct domain *d)
         }
 
         for ( i = 0; i < MAX_ALTP2M; i++ )
-            p2m_teardown(d->arch.altp2m_p2m[i]);
+            p2m_teardown(d->arch.altp2m_p2m[i], true);
     }
 
     /* Destroy nestedp2m's first */
     for (i = 0; i < MAX_NESTEDP2M; i++) {
-        p2m_teardown(d->arch.nested_p2m[i]);
+        p2m_teardown(d->arch.nested_p2m[i], true);
     }
 
     if ( d->arch.paging.hap.total_pages != 0 )
         hap_teardown(d, NULL);
 
-    p2m_teardown(p2m_get_hostp2m(d));
+    p2m_teardown(p2m_get_hostp2m(d), true);
     /* Free any memory that the p2m teardown released */
     paging_lock(d);
     hap_set_allocation(d, 0, NULL);
diff --git a/xen/arch/x86/mm/p2m-basic.c b/xen/arch/x86/mm/p2m-basic.c
index 9130fc2a70..3231aaa9ba 100644
--- a/xen/arch/x86/mm/p2m-basic.c
+++ b/xen/arch/x86/mm/p2m-basic.c
@@ -154,10 +154,10 @@ int p2m_init(struct domain *d)
  * hvm fixme: when adding support for pvh non-hardware domains, this path must
  * cleanup any foreign p2m types (release refcnts on them).
  */
-void p2m_teardown(struct p2m_domain *p2m)
+void p2m_teardown(struct p2m_domain *p2m, bool remove_root)
 {
 #ifdef CONFIG_HVM
-    struct page_info *pg;
+    struct page_info *pg, *root_pg = NULL;
     struct domain *d;
 
     if ( !p2m )
@@ -171,10 +171,20 @@ void p2m_teardown(struct p2m_domain *p2m)
     ASSERT(atomic_read(&d->shr_pages) == 0);
 #endif
 
-    p2m->phys_table = pagetable_null();
+    if ( remove_root )
+        p2m->phys_table = pagetable_null();
+    else if ( !pagetable_is_null(p2m->phys_table) )
+    {
+        root_pg = pagetable_get_page(p2m->phys_table);
+        clear_domain_page(pagetable_get_mfn(p2m->phys_table));
+    }
 
     while ( (pg = page_list_remove_head(&p2m->pages)) )
-        d->arch.paging.free_page(d, pg);
+        if ( pg != root_pg )
+            d->arch.paging.free_page(d, pg);
+
+    if ( root_pg )
+        page_list_add(root_pg, &p2m->pages);
 
     p2m_unlock(p2m);
 #endif
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 0247f0c84e..3e1e43a389 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2707,7 +2707,7 @@ int shadow_enable(struct domain *d, u32 mode)
  out_unlocked:
 #ifdef CONFIG_HVM
     if ( rv != 0 && !pagetable_is_null(p2m_get_pagetable(p2m)) )
-        p2m_teardown(p2m);
+        p2m_teardown(p2m, true);
 #endif
     if ( rv != 0 && pg != NULL )
     {
@@ -2873,7 +2873,7 @@ void shadow_final_teardown(struct domain *d)
         shadow_teardown(d, NULL);
 
     /* It is now safe to pull down the p2m map. */
-    p2m_teardown(p2m_get_hostp2m(d));
+    p2m_teardown(p2m_get_hostp2m(d), true);
     /* Free any shadow memory that the p2m teardown released */
     paging_lock(d);
     shadow_set_allocation(d, 0, NULL);
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 12:33:35 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 12:33:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420162.664733 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiERz-0000JR-Q7; Tue, 11 Oct 2022 12:33:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420162.664733; Tue, 11 Oct 2022 12:33:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiERz-0000JJ-NW; Tue, 11 Oct 2022 12:33:35 +0000
Received: by outflank-mailman (input) for mailman id 420162;
 Tue, 11 Oct 2022 12:33:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiERy-0000Ix-N6
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:33:34 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiERy-0001Nw-ML
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:33:34 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiERy-00070V-LE
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:33:34 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=/be2DSBslZ2YT6duV+6i7ayhV6Rv+BuxYQC07m9/dmQ=; b=NP3eJbycwJaYw8AVZ7ito7iYbd
	bKO0gCh76GTznrzZ1TsuedIevoGLxw1cQDSpD2Xd6qH7DIkZjoVucU6QYGTunJz3qydMRXZfKyu3p
	eE37VrV8wrVarhIBFmHtnvs/0csk1h7j0++B9vDk2TqoST+Hxkvzl7nJ/nHAWH5S2xZ8=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] x86/HAP: adjust monitor table related error handling
Message-Id: <E1oiERy-00070V-LE@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 12:33:34 +0000

commit 5b44a61180f4f2e4f490a28400c884dd357ff45d
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 14:21:56 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:21:56 2022 +0200

    x86/HAP: adjust monitor table related error handling
    
    hap_make_monitor_table() will return INVALID_MFN if it encounters an
    error condition, but hap_update_paging_modes() wasn't handling this
    value, resulting in an inappropriate value being stored in
    monitor_table. This would subsequently misguide at least
    hap_vcpu_teardown(). Avoid this by bailing early.
    
    Further, when a domain has already crashed or (perhaps less
    importantly, as there's no known path leading here in that case) is
    already dying, avoid calling domain_crash() on it again - that's at
    best confusing.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/mm/hap/hap.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 9e0b725c59..691d5d2dd1 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -39,6 +39,7 @@
 #include <asm/domain.h>
 #include <xen/numa.h>
 #include <asm/hvm/nestedhvm.h>
+#include <public/sched.h>
 
 #include "private.h"
 
@@ -405,8 +406,13 @@ static mfn_t hap_make_monitor_table(struct vcpu *v)
     return m4mfn;
 
  oom:
-    printk(XENLOG_G_ERR "out of memory building monitor pagetable\n");
-    domain_crash(d);
+    if ( !d->is_dying &&
+         (!d->is_shutting_down || d->shutdown_code != SHUTDOWN_crash) )
+    {
+        printk(XENLOG_G_ERR "%pd: out of memory building monitor pagetable\n",
+               d);
+        domain_crash(d);
+    }
     return INVALID_MFN;
 }
 
@@ -763,6 +769,9 @@ static void cf_check hap_update_paging_modes(struct vcpu *v)
     if ( pagetable_is_null(v->arch.hvm.monitor_table) )
     {
         mfn_t mmfn = hap_make_monitor_table(v);
+
+        if ( mfn_eq(mmfn, INVALID_MFN) )
+            goto unlock;
         v->arch.hvm.monitor_table = pagetable_from_mfn(mmfn);
         make_cr3(v, mmfn);
         hvm_update_host_cr3(v);
@@ -771,6 +780,7 @@ static void cf_check hap_update_paging_modes(struct vcpu *v)
     /* CR3 is effectively updated by a mode change. Flush ASIDs, etc. */
     hap_update_cr3(v, 0, false);
 
+ unlock:
     paging_unlock(d);
     put_gfn(d, cr3_gfn);
 }
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 12:33:45 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 12:33:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420164.664738 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiES9-0000MF-Rs; Tue, 11 Oct 2022 12:33:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420164.664738; Tue, 11 Oct 2022 12:33:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiES9-0000M7-P3; Tue, 11 Oct 2022 12:33:45 +0000
Received: by outflank-mailman (input) for mailman id 420164;
 Tue, 11 Oct 2022 12:33:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiES8-0000Lr-Q7
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:33:44 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiES8-0001OI-PO
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:33:44 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiES8-000718-Oh
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:33:44 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=Lgh/n8AXzCu21pkmlJUCKtFswwOMA7RSRGerL1J4idU=; b=UsR7UvrLrzeI6SOVIqz6jhMhHE
	tE+8ZL+3JiZDp+tnFBLzX+4Bp61XpqIDidn1rWMUpig4tvqzXJ/xh5db+tMYqXTp3YPRgknx8PGUS
	dbZ4y/17L3aC2DiabJJN+U+rRy9QMyjALIPSreDATueeasm+56XrnVs6Wrpl1+iJOit0=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] x86/shadow: tolerate failure of sh_set_toplevel_shadow()
Message-Id: <E1oiES8-000718-Oh@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 12:33:44 +0000

commit eac000978c1feb5a9ee3236ab0c0da9a477e5336
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 14:22:24 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:22:24 2022 +0200

    x86/shadow: tolerate failure of sh_set_toplevel_shadow()
    
    Subsequently sh_set_toplevel_shadow() will be adjusted to install a
    blank entry in case prealloc fails. There are, in fact, pre-existing
    error paths which would put in place a blank entry. The 4- and
    2-level code in sh_update_cr3(), however, assumes the top-level
    entry to be valid.
    
    Hence bail from the function in the unlikely event that it's not.
    Note that the 3-level logic works differently: in particular a guest
    is free to supply a PDPTR pointing at 4 non-present (or otherwise
    deemed invalid) entries. The guest will crash, but we already cope
    with that.
    
    Really, mfn_valid() is likely the wrong check to use in
    sh_set_toplevel_shadow(), and it should instead be !mfn_eq(gmfn,
    INVALID_MFN). Avoid such a change in a security fix, but add a
    corresponding assertion.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
 xen/arch/x86/mm/shadow/common.c |  1 +
 xen/arch/x86/mm/shadow/multi.c  | 10 ++++++++++
 2 files changed, 11 insertions(+)

diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 3e1e43a389..a1961291a2 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2521,6 +2521,7 @@ void sh_set_toplevel_shadow(struct vcpu *v,
     /* Now figure out the new contents: is this a valid guest MFN? */
     if ( !mfn_valid(gmfn) )
     {
+        ASSERT(mfn_eq(gmfn, INVALID_MFN));
         new_entry = pagetable_null();
         goto install_new_entry;
     }
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index e10de449f1..a51ec5d4f5 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -3316,6 +3316,11 @@ static void cf_check sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
     if ( sh_remove_write_access(d, gmfn, 4, 0) != 0 )
         guest_flush_tlb_mask(d, d->dirty_cpumask);
     sh_set_toplevel_shadow(v, 0, gmfn, SH_type_l4_shadow, sh_make_shadow);
+    if ( unlikely(pagetable_is_null(v->arch.paging.shadow.shadow_table[0])) )
+    {
+        ASSERT(d->is_dying || d->is_shutting_down);
+        return;
+    }
     if ( !shadow_mode_external(d) && !is_pv_32bit_domain(d) )
     {
         mfn_t smfn = pagetable_get_mfn(v->arch.paging.shadow.shadow_table[0]);
@@ -3372,6 +3377,11 @@ static void cf_check sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
     if ( sh_remove_write_access(d, gmfn, 2, 0) != 0 )
         guest_flush_tlb_mask(d, d->dirty_cpumask);
     sh_set_toplevel_shadow(v, 0, gmfn, SH_type_l2_shadow, sh_make_shadow);
+    if ( unlikely(pagetable_is_null(v->arch.paging.shadow.shadow_table[0])) )
+    {
+        ASSERT(d->is_dying || d->is_shutting_down);
+        return;
+    }
 #else
 #error This should never happen
 #endif
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 12:33:55 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 12:33:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420165.664742 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiESJ-0000PJ-TD; Tue, 11 Oct 2022 12:33:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420165.664742; Tue, 11 Oct 2022 12:33:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiESJ-0000PB-Qf; Tue, 11 Oct 2022 12:33:55 +0000
Received: by outflank-mailman (input) for mailman id 420165;
 Tue, 11 Oct 2022 12:33:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiESI-0000Ow-Te
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:33:54 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiESI-0001OT-Sp
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:33:54 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiESI-00073P-Rm
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:33:54 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=QIyUVoxyPBfk8ducMQDlZquXAa3suBIq3PVPPObZ+Jo=; b=J5Thd9ttjhXLFUGR/7WVILnJ9Z
	Tuy1pYMc2J73eCPjNCT8hvOejP5QyPObl+5pA8yeaDzCYqQh3+S2XGML0MBcB61OTdQPB9Meiubli
	eFEznGEFHdrkrcg2qB/W4dUIbguEqXRPeklsRRzmQ5Vih4MZYU32A8056ai889YZYxVE=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] x86/shadow: tolerate failure in shadow_prealloc()
Message-Id: <E1oiESI-00073P-Rm@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 12:33:54 +0000

commit b7f93c6afb12b6061e2d19de2f39ea09b569ac68
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 14:22:53 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:22:53 2022 +0200

    x86/shadow: tolerate failure in shadow_prealloc()
    
    Prevent _shadow_prealloc() from calling BUG() when unable to fulfill
    the pre-allocation and instead return true/false.  Modify
    shadow_prealloc() to crash the domain on allocation failure (if the
    domain is not already dying), as shadow cannot operate normally after
    that.  Modify callers to also gracefully handle {_,}shadow_prealloc()
    failing to fulfill the request.
    
    Note this in turn requires adjusting the callers of
    sh_make_monitor_table() to also handle it returning INVALID_MFN.
    sh_update_paging_modes() is also modified to add additional error
    paths in case of allocation failure; some of those will return with
    null monitor page tables (and the domain likely crashed).  This is no
    different from the current error paths, but the newly introduced ones
    are more likely to trigger.
    
    The newly added failure points in sh_update_paging_modes() also
    require that on some error return paths the previous structures are
    cleared, and thus the monitor table is null.
    
    While there adjust the 'type' parameter type of shadow_prealloc() to
    unsigned int rather than u32.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/mm/shadow/common.c  | 69 ++++++++++++++++++++++++++++++----------
 xen/arch/x86/mm/shadow/hvm.c     |  4 ++-
 xen/arch/x86/mm/shadow/multi.c   | 11 +++++--
 xen/arch/x86/mm/shadow/private.h |  3 +-
 4 files changed, 66 insertions(+), 21 deletions(-)

diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index a1961291a2..5b24be5325 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -36,6 +36,7 @@
 #include <asm/flushtlb.h>
 #include <asm/shadow.h>
 #include <xen/numa.h>
+#include <public/sched.h>
 #include "private.h"
 
 DEFINE_PER_CPU(uint32_t,trace_shadow_path_flags);
@@ -928,14 +929,15 @@ static inline void trace_shadow_prealloc_unpin(struct domain *d, mfn_t smfn)
 
 /* Make sure there are at least count order-sized pages
  * available in the shadow page pool. */
-static void _shadow_prealloc(struct domain *d, unsigned int pages)
+static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
 {
     struct vcpu *v;
     struct page_info *sp, *t;
     mfn_t smfn;
     int i;
 
-    if ( d->arch.paging.shadow.free_pages >= pages ) return;
+    if ( d->arch.paging.shadow.free_pages >= pages )
+        return true;
 
     /* Shouldn't have enabled shadows if we've no vcpus. */
     ASSERT(d->vcpu && d->vcpu[0]);
@@ -951,7 +953,8 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
         sh_unpin(d, smfn);
 
         /* See if that freed up enough space */
-        if ( d->arch.paging.shadow.free_pages >= pages ) return;
+        if ( d->arch.paging.shadow.free_pages >= pages )
+            return true;
     }
 
     /* Stage two: all shadow pages are in use in hierarchies that are
@@ -974,7 +977,7 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
                 if ( d->arch.paging.shadow.free_pages >= pages )
                 {
                     guest_flush_tlb_mask(d, d->dirty_cpumask);
-                    return;
+                    return true;
                 }
             }
         }
@@ -987,7 +990,12 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
            d->arch.paging.shadow.total_pages,
            d->arch.paging.shadow.free_pages,
            d->arch.paging.shadow.p2m_pages);
-    BUG();
+
+    ASSERT(d->is_dying);
+
+    guest_flush_tlb_mask(d, d->dirty_cpumask);
+
+    return false;
 }
 
 /* Make sure there are at least count pages of the order according to
@@ -995,9 +1003,19 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
  * This must be called before any calls to shadow_alloc().  Since this
  * will free existing shadows to make room, it must be called early enough
  * to avoid freeing shadows that the caller is currently working on. */
-void shadow_prealloc(struct domain *d, u32 type, unsigned int count)
+bool shadow_prealloc(struct domain *d, unsigned int type, unsigned int count)
 {
-    return _shadow_prealloc(d, shadow_size(type) * count);
+    bool ret = _shadow_prealloc(d, shadow_size(type) * count);
+
+    if ( !ret && !d->is_dying &&
+         (!d->is_shutting_down || d->shutdown_code != SHUTDOWN_crash) )
+        /*
+         * Failing to allocate memory required for shadow usage can only result in
+         * a domain crash, do it here rather that relying on every caller to do it.
+         */
+        domain_crash(d);
+
+    return ret;
 }
 
 /* Deliberately free all the memory we can: this will tear down all of
@@ -1218,7 +1236,7 @@ void shadow_free(struct domain *d, mfn_t smfn)
 static struct page_info *cf_check
 shadow_alloc_p2m_page(struct domain *d)
 {
-    struct page_info *pg;
+    struct page_info *pg = NULL;
 
     /* This is called both from the p2m code (which never holds the
      * paging lock) and the log-dirty code (which always does). */
@@ -1236,16 +1254,18 @@ shadow_alloc_p2m_page(struct domain *d)
                     d->arch.paging.shadow.p2m_pages,
                     shadow_min_acceptable_pages(d));
         }
-        paging_unlock(d);
-        return NULL;
+        goto out;
     }
 
-    shadow_prealloc(d, SH_type_p2m_table, 1);
+    if ( !shadow_prealloc(d, SH_type_p2m_table, 1) )
+        goto out;
+
     pg = mfn_to_page(shadow_alloc(d, SH_type_p2m_table, 0));
     d->arch.paging.shadow.p2m_pages++;
     d->arch.paging.shadow.total_pages--;
     ASSERT(!page_get_owner(pg) && !(pg->count_info & PGC_count_mask));
 
+ out:
     paging_unlock(d);
 
     return pg;
@@ -1336,7 +1356,9 @@ int shadow_set_allocation(struct domain *d, unsigned int pages, bool *preempted)
         else if ( d->arch.paging.shadow.total_pages > pages )
         {
             /* Need to return memory to domheap */
-            _shadow_prealloc(d, 1);
+            if ( !_shadow_prealloc(d, 1) )
+                return -ENOMEM;
+
             sp = page_list_remove_head(&d->arch.paging.shadow.freelist);
             ASSERT(sp);
             /*
@@ -2339,12 +2361,13 @@ static void sh_update_paging_modes(struct vcpu *v)
     if ( mfn_eq(v->arch.paging.shadow.oos_snapshot[0], INVALID_MFN) )
     {
         int i;
+
+        if ( !shadow_prealloc(d, SH_type_oos_snapshot, SHADOW_OOS_PAGES) )
+            return;
+
         for(i = 0; i < SHADOW_OOS_PAGES; i++)
-        {
-            shadow_prealloc(d, SH_type_oos_snapshot, 1);
             v->arch.paging.shadow.oos_snapshot[i] =
                 shadow_alloc(d, SH_type_oos_snapshot, 0);
-        }
     }
 #endif /* OOS */
 
@@ -2408,6 +2431,9 @@ static void sh_update_paging_modes(struct vcpu *v)
             mfn_t mmfn = sh_make_monitor_table(
                              v, v->arch.paging.mode->shadow.shadow_levels);
 
+            if ( mfn_eq(mmfn, INVALID_MFN) )
+                return;
+
             v->arch.hvm.monitor_table = pagetable_from_mfn(mmfn);
             make_cr3(v, mmfn);
             hvm_update_host_cr3(v);
@@ -2446,6 +2472,12 @@ static void sh_update_paging_modes(struct vcpu *v)
                 v->arch.hvm.monitor_table = pagetable_null();
                 new_mfn = sh_make_monitor_table(
                               v, v->arch.paging.mode->shadow.shadow_levels);
+                if ( mfn_eq(new_mfn, INVALID_MFN) )
+                {
+                    sh_destroy_monitor_table(v, old_mfn,
+                                             old_mode->shadow.shadow_levels);
+                    return;
+                }
                 v->arch.hvm.monitor_table = pagetable_from_mfn(new_mfn);
                 SHADOW_PRINTK("new monitor table %"PRI_mfn "\n",
                                mfn_x(new_mfn));
@@ -2531,7 +2563,12 @@ void sh_set_toplevel_shadow(struct vcpu *v,
     if ( !mfn_valid(smfn) )
     {
         /* Make sure there's enough free shadow memory. */
-        shadow_prealloc(d, root_type, 1);
+        if ( !shadow_prealloc(d, root_type, 1) )
+        {
+            new_entry = pagetable_null();
+            goto install_new_entry;
+        }
+
         /* Shadow the page. */
         smfn = make_shadow(v, gmfn, root_type);
     }
diff --git a/xen/arch/x86/mm/shadow/hvm.c b/xen/arch/x86/mm/shadow/hvm.c
index c084bc8ed7..29a58d9131 100644
--- a/xen/arch/x86/mm/shadow/hvm.c
+++ b/xen/arch/x86/mm/shadow/hvm.c
@@ -697,7 +697,9 @@ mfn_t sh_make_monitor_table(const struct vcpu *v, unsigned int shadow_levels)
     ASSERT(!pagetable_get_pfn(v->arch.hvm.monitor_table));
 
     /* Guarantee we can get the memory we need */
-    shadow_prealloc(d, SH_type_monitor_table, CONFIG_PAGING_LEVELS);
+    if ( !shadow_prealloc(d, SH_type_monitor_table, CONFIG_PAGING_LEVELS) )
+        return INVALID_MFN;
+
     m4mfn = shadow_alloc(d, SH_type_monitor_table, 0);
     mfn_to_page(m4mfn)->shadow_flags = 4;
 
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index a51ec5d4f5..2370b30602 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -2447,9 +2447,14 @@ static int cf_check sh_page_fault(
      * Preallocate shadow pages *before* removing writable accesses
      * otherwhise an OOS L1 might be demoted and promoted again with
      * writable mappings. */
-    shadow_prealloc(d,
-                    SH_type_l1_shadow,
-                    GUEST_PAGING_LEVELS < 4 ? 1 : GUEST_PAGING_LEVELS - 1);
+    if ( !shadow_prealloc(d, SH_type_l1_shadow,
+                          GUEST_PAGING_LEVELS < 4
+                          ? 1 : GUEST_PAGING_LEVELS - 1) )
+    {
+        paging_unlock(d);
+        put_gfn(d, gfn_x(gfn));
+        return 0;
+    }
 
     rc = gw_remove_write_accesses(v, va, &gw);
 
diff --git a/xen/arch/x86/mm/shadow/private.h b/xen/arch/x86/mm/shadow/private.h
index 3a74f45362..85bb26c7ea 100644
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -383,7 +383,8 @@ void shadow_promote(struct domain *d, mfn_t gmfn, u32 type);
 void shadow_demote(struct domain *d, mfn_t gmfn, u32 type);
 
 /* Shadow page allocation functions */
-void  shadow_prealloc(struct domain *d, u32 shadow_type, unsigned int count);
+bool __must_check shadow_prealloc(struct domain *d, unsigned int shadow_type,
+                                  unsigned int count);
 mfn_t shadow_alloc(struct domain *d,
                     u32 shadow_type,
                     unsigned long backpointer);
--
generated by git-patchbot for /home/xen/git/xen.git#staging
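The change above follows a common hardening pattern: turn a helper that calls BUG() on failure into a __must_check boolean, and centralize the domain_crash() fallback in a thin wrapper so every caller need only propagate the result. A minimal standalone sketch of that shape (the struct and helper names below are simplified stand-ins, not Xen's real types):

```c
#include <stdbool.h>

/* Hypothetical, much-simplified stand-in for Xen's struct domain. */
struct domain {
    unsigned int free_pages;
    bool is_dying;
    bool crashed;
};

/* Before the fix: a failure here called BUG(), taking down the whole
 * host.  After: report failure to the caller instead. */
static bool prealloc(struct domain *d, unsigned int pages)
{
    if (d->free_pages >= pages)
        return true;
    /* ...reclaim attempts elided in this sketch... */
    return false;
}

/* Wrapper: on failure, crash only the offending domain, and do it in
 * one place rather than relying on every caller to remember to. */
static bool checked_prealloc(struct domain *d, unsigned int pages)
{
    bool ok = prealloc(d, pages);

    if (!ok && !d->is_dying)
        d->crashed = true;   /* stands in for domain_crash(d) */

    return ok;
}
```

Callers then bail out gracefully on a false return, mirroring the INVALID_MFN and -ENOMEM paths added in the patch.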


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 12:34:06 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 12:34:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420167.664745 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiESU-0000TV-0J; Tue, 11 Oct 2022 12:34:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420167.664745; Tue, 11 Oct 2022 12:34:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiEST-0000TN-Ts; Tue, 11 Oct 2022 12:34:05 +0000
Received: by outflank-mailman (input) for mailman id 420167;
 Tue, 11 Oct 2022 12:34:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiEST-0000T6-0S
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:34:05 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiESS-0001Om-Vu
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:34:04 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiESS-00074J-VA
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:34:04 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=4L9VKeHTqUKxJaiPjn9fCawDsIkRBwcVPdKRjZg74CU=; b=clPSV5WLdiCKke5+yQLp/pPVOu
	IrNKvPYTHL3yu/1VyG5b9wejYd2ZyaMMaQTLwfO2oJhNBTWV/g6ryPKXrm/uVm01MFcfr9y5NzMGK
	WJDWJuhUF+DILMoQY0vGin7FcD6qHYNxXH0oeupQNU2e1kpNEnSUifLNmnSvC2vyO6h0=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] x86/p2m: refuse new allocations for dying domains
Message-Id: <E1oiESS-00074J-VA@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 12:34:04 +0000

commit ff600a8cf8e36f8ecbffecf96a035952e022ab87
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 14:23:22 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:23:22 2022 +0200

    x86/p2m: refuse new allocations for dying domains
    
    This will in particular prevent any attempts to add entries to the p2m,
    once - in a subsequent change - non-root entries have been removed.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/mm/hap/hap.c       |  5 ++++-
 xen/arch/x86/mm/shadow/common.c | 18 ++++++++++++++----
 2 files changed, 18 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 691d5d2dd1..9ce2123c42 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -245,6 +245,9 @@ static struct page_info *hap_alloc(struct domain *d)
 
     ASSERT(paging_locked_by_me(d));
 
+    if ( unlikely(d->is_dying) )
+        return NULL;
+
     pg = page_list_remove_head(&d->arch.paging.hap.freelist);
     if ( unlikely(!pg) )
         return NULL;
@@ -281,7 +284,7 @@ static struct page_info *cf_check hap_alloc_p2m_page(struct domain *d)
         d->arch.paging.hap.p2m_pages++;
         ASSERT(!page_get_owner(pg) && !(pg->count_info & PGC_count_mask));
     }
-    else if ( !d->arch.paging.p2m_alloc_failed )
+    else if ( !d->arch.paging.p2m_alloc_failed && !d->is_dying )
     {
         d->arch.paging.p2m_alloc_failed = 1;
         dprintk(XENLOG_ERR, "d%i failed to allocate from HAP pool\n",
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 5b24be5325..8cca19ef84 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -939,6 +939,10 @@ static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
     if ( d->arch.paging.shadow.free_pages >= pages )
         return true;
 
+    if ( unlikely(d->is_dying) )
+        /* No reclaim when the domain is dying, teardown will take care of it. */
+        return false;
+
     /* Shouldn't have enabled shadows if we've no vcpus. */
     ASSERT(d->vcpu && d->vcpu[0]);
 
@@ -991,7 +995,7 @@ static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
            d->arch.paging.shadow.free_pages,
            d->arch.paging.shadow.p2m_pages);
 
-    ASSERT(d->is_dying);
+    ASSERT_UNREACHABLE();
 
     guest_flush_tlb_mask(d, d->dirty_cpumask);
 
@@ -1005,10 +1009,13 @@ static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
  * to avoid freeing shadows that the caller is currently working on. */
 bool shadow_prealloc(struct domain *d, unsigned int type, unsigned int count)
 {
-    bool ret = _shadow_prealloc(d, shadow_size(type) * count);
+    bool ret;
 
-    if ( !ret && !d->is_dying &&
-         (!d->is_shutting_down || d->shutdown_code != SHUTDOWN_crash) )
+    if ( unlikely(d->is_dying) )
+       return false;
+
+    ret = _shadow_prealloc(d, shadow_size(type) * count);
+    if ( !ret && (!d->is_shutting_down || d->shutdown_code != SHUTDOWN_crash) )
         /*
          * Failing to allocate memory required for shadow usage can only result in
          * a domain crash, do it here rather that relying on every caller to do it.
@@ -1238,6 +1245,9 @@ shadow_alloc_p2m_page(struct domain *d)
 {
     struct page_info *pg = NULL;
 
+    if ( unlikely(d->is_dying) )
+       return NULL;
+
     /* This is called both from the p2m code (which never holds the
      * paging lock) and the log-dirty code (which always does). */
     paging_lock_recursive(d);
--
generated by git-patchbot for /home/xen/git/xen.git#staging
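The guards added above all have the same shape: allocators check d->is_dying up front and refuse to hand out pool pages, so a dying domain's teardown cannot race with fresh p2m allocations. A simplified sketch of that guard (the types and the function name are illustrative, not Xen's real hap_alloc()/shadow_alloc_p2m_page()):

```c
#include <stdbool.h>
#include <stddef.h>

struct page { struct page *next; };

/* Hypothetical, much-simplified stand-in for Xen's struct domain. */
struct domain {
    bool is_dying;
    struct page *freelist;
    unsigned int p2m_pages;
};

/* Refuse new pool allocations once the domain is marked dying;
 * teardown will reclaim whatever remains. */
static struct page *pool_alloc(struct domain *d)
{
    struct page *pg;

    if (d->is_dying)
        return NULL;

    pg = d->freelist;
    if (pg) {
        d->freelist = pg->next;
        d->p2m_pages++;
    }
    return pg;
}
```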


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 12:34:16 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 12:34:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420169.664750 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiESe-0000WZ-20; Tue, 11 Oct 2022 12:34:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420169.664750; Tue, 11 Oct 2022 12:34:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiESd-0000WR-VQ; Tue, 11 Oct 2022 12:34:15 +0000
Received: by outflank-mailman (input) for mailman id 420169;
 Tue, 11 Oct 2022 12:34:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiESd-0000WF-3M
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:34:15 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiESd-0001PC-2Z
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:34:15 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiESd-00075F-1r
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:34:15 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=xjTh9M1TW2FqgwYBquOMswjYmLmnWLtcUkwcYk9D8xw=; b=vCx194j/jv2cfG2mkXh13JEJN3
	Rko7qthVuAQlymIkYKCFaJNE2MqMWJc7OONi/L8HxzZecnMARoiTGr9JrF6Ce+M8ZvOTwNIQ8jw5u
	SXSjEVojK7juz0uU5g1voxCsES1PC6ifcHBx95x0FFAR6A3k6swZ92i66PYFipbneVcs=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] x86/p2m: truly free paging pool memory for dying domains
Message-Id: <E1oiESd-00075F-1r@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 12:34:15 +0000

commit f50a2c0e1d057c00d6061f40ae24d068226052ad
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 14:23:51 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:23:51 2022 +0200

    x86/p2m: truly free paging pool memory for dying domains
    
    Modify {hap,shadow}_free to free the page immediately if the domain is
    dying, so that pages don't accumulate in the pool when
    {shadow,hap}_final_teardown() gets called. This is to limit the amount of
    work which needs to be done there (in a non-preemptible manner).
    
    Note the call to shadow_free() in shadow_free_p2m_page() is moved after
    increasing total_pages, so that the decrease done in shadow_free() in
    case the domain is dying doesn't underflow the counter, even if just for
    a short interval.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/mm/hap/hap.c       | 12 ++++++++++++
 xen/arch/x86/mm/shadow/common.c | 28 +++++++++++++++++++++++++---
 2 files changed, 37 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 9ce2123c42..dbdf4f6dd1 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -265,6 +265,18 @@ static void hap_free(struct domain *d, mfn_t mfn)
 
     ASSERT(paging_locked_by_me(d));
 
+    /*
+     * For dying domains, actually free the memory here. This way less work is
+     * left to hap_final_teardown(), which cannot easily have preemption checks
+     * added.
+     */
+    if ( unlikely(d->is_dying) )
+    {
+        free_domheap_page(pg);
+        d->arch.paging.hap.total_pages--;
+        return;
+    }
+
     d->arch.paging.hap.free_pages++;
     page_list_add_tail(pg, &d->arch.paging.hap.freelist);
 }
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 8cca19ef84..ec2fc678fa 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -1187,6 +1187,7 @@ mfn_t shadow_alloc(struct domain *d,
 void shadow_free(struct domain *d, mfn_t smfn)
 {
     struct page_info *next = NULL, *sp = mfn_to_page(smfn);
+    bool dying = ACCESS_ONCE(d->is_dying);
     struct page_list_head *pin_list;
     unsigned int pages;
     u32 shadow_type;
@@ -1229,11 +1230,32 @@ void shadow_free(struct domain *d, mfn_t smfn)
          * just before the allocator hands the page out again. */
         page_set_tlbflush_timestamp(sp);
         perfc_decr(shadow_alloc_count);
-        page_list_add_tail(sp, &d->arch.paging.shadow.freelist);
+
+        /*
+         * For dying domains, actually free the memory here. This way less
+         * work is left to shadow_final_teardown(), which cannot easily have
+         * preemption checks added.
+         */
+        if ( unlikely(dying) )
+        {
+            /*
+             * The backpointer field (sh.back) used by shadow code aliases the
+             * domain owner field, unconditionally clear it here to avoid
+             * free_domheap_page() attempting to parse it.
+             */
+            page_set_owner(sp, NULL);
+            free_domheap_page(sp);
+        }
+        else
+            page_list_add_tail(sp, &d->arch.paging.shadow.freelist);
+
         sp = next;
     }
 
-    d->arch.paging.shadow.free_pages += pages;
+    if ( unlikely(dying) )
+        d->arch.paging.shadow.total_pages -= pages;
+    else
+        d->arch.paging.shadow.free_pages += pages;
 }
 
 /* Divert a page from the pool to be used by the p2m mapping.
@@ -1303,9 +1325,9 @@ shadow_free_p2m_page(struct domain *d, struct page_info *pg)
      * paging lock) and the log-dirty code (which always does). */
     paging_lock_recursive(d);
 
-    shadow_free(d, page_to_mfn(pg));
     d->arch.paging.shadow.p2m_pages--;
     d->arch.paging.shadow.total_pages++;
+    shadow_free(d, page_to_mfn(pg));
 
     paging_unlock(d);
 }
--
generated by git-patchbot for /home/xen/git/xen.git#staging
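Two details of the patch above are worth isolating: the free path branches on is_dying (free the page to the heap immediately and shrink total_pages, versus queueing it on the freelist), and shadow_free_p2m_page() must bump total_pages *before* freeing so the dying-path decrement cannot underflow the counter. A simplified sketch, with illustrative names standing in for shadow_free()/shadow_free_p2m_page():

```c
#include <stdbool.h>
#include <stdlib.h>

/* Hypothetical, much-simplified stand-in for Xen's paging pool state. */
struct domain {
    bool is_dying;
    unsigned int free_pages, total_pages, p2m_pages;
};

/* For a dying domain, release the page to the system right away and
 * shrink the pool, instead of queueing it for the non-preemptible
 * final teardown to process. */
static void pool_free(struct domain *d, void *pg)
{
    if (d->is_dying) {
        free(pg);            /* stands in for free_domheap_page() */
        d->total_pages--;
    } else {
        d->free_pages++;     /* page would go back on the freelist */
    }
}

/* total_pages must be incremented *before* pool_free(); otherwise the
 * decrement taken on the dying path would momentarily underflow it. */
static void p2m_page_free(struct domain *d, void *pg)
{
    d->p2m_pages--;
    d->total_pages++;
    pool_free(d, pg);
}
```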


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 12:34:26 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 12:34:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420170.664754 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiESo-0000ZV-3O; Tue, 11 Oct 2022 12:34:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420170.664754; Tue, 11 Oct 2022 12:34:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiESo-0000ZN-0k; Tue, 11 Oct 2022 12:34:26 +0000
Received: by outflank-mailman (input) for mailman id 420170;
 Tue, 11 Oct 2022 12:34:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiESn-0000ZB-6Y
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:34:25 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiESn-0001PH-5p
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:34:25 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiESn-00076D-4w
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:34:25 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=YcPEvKjKill4Gio/soXJ7H9HhflS+aHQ0nRA9lHGaCc=; b=RPCTaBy8jbEDuS/oAQkOl06JgY
	52csYxaIXmylOgreufMAgQRqURgKDt/MM955OF3slDqtX5S0gTKx+8vqD+XcSBVY6iE9xS73/4JeJ
	JMPhHNa3KHxGgOkOt98Ia6IQAJft0ncYioeikDjdxiMFDCDrvzFWt2m/WwV1ronMyLF4=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] x86/p2m: free the paging memory pool preemptively
Message-Id: <E1oiESn-00076D-4w@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 12:34:25 +0000

commit e7aa55c0aab36d994bf627c92bd5386ae167e16e
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 14:24:21 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:24:21 2022 +0200

    x86/p2m: free the paging memory pool preemptively
    
    The paging memory pool is currently freed in two different places:
    from {shadow,hap}_teardown() via domain_relinquish_resources() and
    from {shadow,hap}_final_teardown() via complete_domain_destroy().
    While the former does handle preemption, the latter doesn't.
    
    Attempt to move as much p2m related freeing as possible to happen
    before the call to {shadow,hap}_teardown(), so that most memory can be
    freed in a preemptive way.  In order to avoid causing issues to
    existing callers leave the root p2m page tables set and free them in
    {hap,shadow}_final_teardown().  Also modify {hap,shadow}_free to free
    the page immediately if the domain is dying, so that pages don't
    accumulate in the pool when {shadow,hap}_final_teardown() gets called.
    
    Move altp2m_vcpu_disable_ve() to be done in hap_teardown(), as that's
    the place where altp2m_active gets disabled now.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Reported-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/domain.c           |  7 -------
 xen/arch/x86/mm/hap/hap.c       | 42 +++++++++++++++++++++++++----------------
 xen/arch/x86/mm/shadow/common.c | 12 ++++++++++++
 3 files changed, 38 insertions(+), 23 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 41e1e3f272..a5d2d66852 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -38,7 +38,6 @@
 #include <xen/livepatch.h>
 #include <public/sysctl.h>
 #include <public/hvm/hvm_vcpu.h>
-#include <asm/altp2m.h>
 #include <asm/regs.h>
 #include <asm/mc146818rtc.h>
 #include <asm/system.h>
@@ -2406,12 +2405,6 @@ int domain_relinquish_resources(struct domain *d)
             vpmu_destroy(v);
         }
 
-        if ( altp2m_active(d) )
-        {
-            for_each_vcpu ( d, v )
-                altp2m_vcpu_disable_ve(v);
-        }
-
         if ( is_pv_domain(d) )
         {
             for_each_vcpu ( d, v )
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index dbdf4f6dd1..d058050d63 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -28,6 +28,7 @@
 #include <xen/domain_page.h>
 #include <xen/guest_access.h>
 #include <xen/keyhandler.h>
+#include <asm/altp2m.h>
 #include <asm/event.h>
 #include <asm/page.h>
 #include <asm/current.h>
@@ -546,24 +547,8 @@ void hap_final_teardown(struct domain *d)
     unsigned int i;
 
     if ( hvm_altp2m_supported() )
-    {
-        d->arch.altp2m_active = 0;
-
-        if ( d->arch.altp2m_eptp )
-        {
-            free_xenheap_page(d->arch.altp2m_eptp);
-            d->arch.altp2m_eptp = NULL;
-        }
-
-        if ( d->arch.altp2m_visible_eptp )
-        {
-            free_xenheap_page(d->arch.altp2m_visible_eptp);
-            d->arch.altp2m_visible_eptp = NULL;
-        }
-
         for ( i = 0; i < MAX_ALTP2M; i++ )
             p2m_teardown(d->arch.altp2m_p2m[i], true);
-    }
 
     /* Destroy nestedp2m's first */
     for (i = 0; i < MAX_NESTEDP2M; i++) {
@@ -578,6 +563,8 @@ void hap_final_teardown(struct domain *d)
     paging_lock(d);
     hap_set_allocation(d, 0, NULL);
     ASSERT(d->arch.paging.hap.p2m_pages == 0);
+    ASSERT(d->arch.paging.hap.free_pages == 0);
+    ASSERT(d->arch.paging.hap.total_pages == 0);
     paging_unlock(d);
 }
 
@@ -603,6 +590,7 @@ void hap_vcpu_teardown(struct vcpu *v)
 void hap_teardown(struct domain *d, bool *preempted)
 {
     struct vcpu *v;
+    unsigned int i;
 
     ASSERT(d->is_dying);
     ASSERT(d != current->domain);
@@ -611,6 +599,28 @@ void hap_teardown(struct domain *d, bool *preempted)
     for_each_vcpu ( d, v )
         hap_vcpu_teardown(v);
 
+    /* Leave the root pt in case we get further attempts to modify the p2m. */
+    if ( hvm_altp2m_supported() )
+    {
+        if ( altp2m_active(d) )
+            for_each_vcpu ( d, v )
+                altp2m_vcpu_disable_ve(v);
+
+        d->arch.altp2m_active = 0;
+
+        FREE_XENHEAP_PAGE(d->arch.altp2m_eptp);
+        FREE_XENHEAP_PAGE(d->arch.altp2m_visible_eptp);
+
+        for ( i = 0; i < MAX_ALTP2M; i++ )
+            p2m_teardown(d->arch.altp2m_p2m[i], false);
+    }
+
+    /* Destroy nestedp2m's after altp2m. */
+    for ( i = 0; i < MAX_NESTEDP2M; i++ )
+        p2m_teardown(d->arch.nested_p2m[i], false);
+
+    p2m_teardown(p2m_get_hostp2m(d), false);
+
     paging_lock(d); /* Keep various asserts happy */
 
     if ( d->arch.paging.hap.total_pages != 0 )
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index ec2fc678fa..64ca18b393 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2831,8 +2831,17 @@ void shadow_teardown(struct domain *d, bool *preempted)
     for_each_vcpu ( d, v )
         shadow_vcpu_teardown(v);
 
+    p2m_teardown(p2m_get_hostp2m(d), false);
+
     paging_lock(d);
 
+    /*
+     * Reclaim all shadow memory so that shadow_set_allocation() doesn't find
+     * in-use pages, as _shadow_prealloc() will no longer try to reclaim pages
+     * because the domain is dying.
+     */
+    shadow_blow_tables(d);
+
 #if (SHADOW_OPTIMIZATIONS & (SHOPT_VIRTUAL_TLB|SHOPT_OUT_OF_SYNC))
     /* Free the virtual-TLB array attached to each vcpu */
     for_each_vcpu(d, v)
@@ -2953,6 +2962,9 @@ void shadow_final_teardown(struct domain *d)
                    d->arch.paging.shadow.total_pages,
                    d->arch.paging.shadow.free_pages,
                    d->arch.paging.shadow.p2m_pages);
+    ASSERT(!d->arch.paging.shadow.total_pages);
+    ASSERT(!d->arch.paging.shadow.free_pages);
+    ASSERT(!d->arch.paging.shadow.p2m_pages);
     paging_unlock(d);
 }
 
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 12:34:36 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 12:34:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420172.664758 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiESy-0000dG-4p; Tue, 11 Oct 2022 12:34:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420172.664758; Tue, 11 Oct 2022 12:34:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiESy-0000d9-2A; Tue, 11 Oct 2022 12:34:36 +0000
Received: by outflank-mailman (input) for mailman id 420172;
 Tue, 11 Oct 2022 12:34:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiESx-0000cx-A5
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:34:35 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiESx-0001PS-9D
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:34:35 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiESx-000771-8E
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:34:35 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=kpseUlh9ZeMlkObnLK3yi5ITc0gx/NrCKBsB663i7KQ=; b=kqD55VlfaI/cPAzxRRSCETYRM2
	A+W8vS0ZrDL5Mu3MaFN7CwLRtIZEnpQq2inm5Peyko0pXtPYDmkebPGVkalRtIe9f4Q5xNXGk54+W
	hp7Rtzk5Rc1ZcoXEvlXM+c+6jlAEWhVPmV2sl3mYRef/NDkZofdmIOmom1ZZUAMU6oyw=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] xen/x86: p2m: Add preemption in p2m_teardown()
Message-Id: <E1oiESx-000771-8E@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 12:34:35 +0000

commit 8a2111250b424edc49c65c4d41b276766d30635c
Author:     Julien Grall <jgrall@amazon.com>
AuthorDate: Tue Oct 11 14:24:48 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:24:48 2022 +0200

    xen/x86: p2m: Add preemption in p2m_teardown()
    
    The list p2m->pages contains all the pages used by the P2M. On large
    instances this list can be quite long, and the time spent calling
    d->arch.paging.free_page() exceeds 1ms for an 80GB guest on Xen
    running in a nested environment on a c5.metal.
    
    By extrapolation, it would take > 100ms for an 8TB guest (which we
    currently security support). So add some preemption in p2m_teardown()
    and propagate it to the callers. Note there are 3 places where
    the preemption is not enabled:
        - hap_final_teardown()/shadow_final_teardown(): Updates to
          the P2M are prevented once the domain is dying (so no more
          pages can be allocated), and most of the P2M pages will be
          freed in a preemptible manner when relinquishing the
          resources. So it is fine to disable preemption here.
        - shadow_enable(): This is fine because it will undo the allocation
          that may have been made by p2m_alloc_table() (so only the root
          page table).
    
    The preemption is arbitrarily checked every 1024 iterations.
    
    We now need to include <xen/event.h> in p2m-basic in order to
    import the definition for local_events_need_delivery() used by
    general_preempt_check(). Ideally, the inclusion should happen in
    xen/sched.h but it opened a can of worms.
    
    Note that with the current approach, Xen doesn't keep track of whether
    the alt/nested P2Ms have been cleared, so some work is redundant.
    However, this is not expected to incur much overhead (the P2M lock
    shouldn't be contended during teardown), so this optimization is
    left outside of the security event.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    ----
    Changes since v12:
        - Correct altp2m preemption check placement.
    
    Changes since v9:
        - Integrate patch into series.
    
    Changes since v2:
        - Rework the loop doing the preemption
        - Add a comment in shadow_enable() to explain why p2m_teardown()
          doesn't need to be preemptible.
    
    Changes since v1:
        - Update the commit message
        - Rebase on top of Roger's v8 series
        - Fix preemption check
        - Use 'unsigned int' rather than 'unsigned long' for the counter
---
 xen/arch/x86/include/asm/p2m.h  |  2 +-
 xen/arch/x86/mm/hap/hap.c       | 22 ++++++++++++++++------
 xen/arch/x86/mm/p2m-basic.c     | 19 ++++++++++++++++---
 xen/arch/x86/mm/shadow/common.c | 12 +++++++++---
 4 files changed, 42 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/include/asm/p2m.h b/xen/arch/x86/include/asm/p2m.h
index bafbd96052..bd684d02f3 100644
--- a/xen/arch/x86/include/asm/p2m.h
+++ b/xen/arch/x86/include/asm/p2m.h
@@ -600,7 +600,7 @@ int p2m_init(struct domain *d);
 int p2m_alloc_table(struct p2m_domain *p2m);
 
 /* Return all the p2m resources to Xen. */
-void p2m_teardown(struct p2m_domain *p2m, bool remove_root);
+void p2m_teardown(struct p2m_domain *p2m, bool remove_root, bool *preempted);
 void p2m_final_teardown(struct domain *d);
 
 /* Add/remove a page to/from a domain's p2m table. */
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index d058050d63..f809ea9aa6 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -548,17 +548,17 @@ void hap_final_teardown(struct domain *d)
 
     if ( hvm_altp2m_supported() )
         for ( i = 0; i < MAX_ALTP2M; i++ )
-            p2m_teardown(d->arch.altp2m_p2m[i], true);
+            p2m_teardown(d->arch.altp2m_p2m[i], true, NULL);
 
     /* Destroy nestedp2m's first */
     for (i = 0; i < MAX_NESTEDP2M; i++) {
-        p2m_teardown(d->arch.nested_p2m[i], true);
+        p2m_teardown(d->arch.nested_p2m[i], true, NULL);
     }
 
     if ( d->arch.paging.hap.total_pages != 0 )
         hap_teardown(d, NULL);
 
-    p2m_teardown(p2m_get_hostp2m(d), true);
+    p2m_teardown(p2m_get_hostp2m(d), true, NULL);
     /* Free any memory that the p2m teardown released */
     paging_lock(d);
     hap_set_allocation(d, 0, NULL);
@@ -612,14 +612,24 @@ void hap_teardown(struct domain *d, bool *preempted)
         FREE_XENHEAP_PAGE(d->arch.altp2m_visible_eptp);
 
         for ( i = 0; i < MAX_ALTP2M; i++ )
-            p2m_teardown(d->arch.altp2m_p2m[i], false);
+        {
+            p2m_teardown(d->arch.altp2m_p2m[i], false, preempted);
+            if ( preempted && *preempted )
+                return;
+        }
     }
 
     /* Destroy nestedp2m's after altp2m. */
     for ( i = 0; i < MAX_NESTEDP2M; i++ )
-        p2m_teardown(d->arch.nested_p2m[i], false);
+    {
+        p2m_teardown(d->arch.nested_p2m[i], false, preempted);
+        if ( preempted && *preempted )
+            return;
+    }
 
-    p2m_teardown(p2m_get_hostp2m(d), false);
+    p2m_teardown(p2m_get_hostp2m(d), false, preempted);
+    if ( preempted && *preempted )
+        return;
 
     paging_lock(d); /* Keep various asserts happy */
 
diff --git a/xen/arch/x86/mm/p2m-basic.c b/xen/arch/x86/mm/p2m-basic.c
index 3231aaa9ba..47b780d6d6 100644
--- a/xen/arch/x86/mm/p2m-basic.c
+++ b/xen/arch/x86/mm/p2m-basic.c
@@ -23,6 +23,7 @@
  * along with this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <xen/event.h>
 #include <xen/types.h>
 #include <asm/p2m.h>
 #include "mm-locks.h"
@@ -154,11 +155,12 @@ int p2m_init(struct domain *d)
  * hvm fixme: when adding support for pvh non-hardware domains, this path must
  * cleanup any foreign p2m types (release refcnts on them).
  */
-void p2m_teardown(struct p2m_domain *p2m, bool remove_root)
+void p2m_teardown(struct p2m_domain *p2m, bool remove_root, bool *preempted)
 {
 #ifdef CONFIG_HVM
     struct page_info *pg, *root_pg = NULL;
     struct domain *d;
+    unsigned int i = 0;
 
     if ( !p2m )
         return;
@@ -180,8 +182,19 @@ void p2m_teardown(struct p2m_domain *p2m, bool remove_root)
     }
 
     while ( (pg = page_list_remove_head(&p2m->pages)) )
-        if ( pg != root_pg )
-            d->arch.paging.free_page(d, pg);
+    {
+        if ( pg == root_pg )
+            continue;
+
+        d->arch.paging.free_page(d, pg);
+
+        /* Arbitrarily check preemption every 1024 iterations */
+        if ( preempted && !(++i % 1024) && general_preempt_check() )
+        {
+            *preempted = true;
+            break;
+        }
+    }
 
     if ( root_pg )
         page_list_add(root_pg, &p2m->pages);
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 64ca18b393..d985d51614 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2776,8 +2776,12 @@ int shadow_enable(struct domain *d, u32 mode)
     paging_unlock(d);
  out_unlocked:
 #ifdef CONFIG_HVM
+    /*
+     * This is fine to ignore the preemption here because only the root
+     * will be allocated by p2m_alloc_table().
+     */
     if ( rv != 0 && !pagetable_is_null(p2m_get_pagetable(p2m)) )
-        p2m_teardown(p2m, true);
+        p2m_teardown(p2m, true, NULL);
 #endif
     if ( rv != 0 && pg != NULL )
     {
@@ -2831,7 +2835,9 @@ void shadow_teardown(struct domain *d, bool *preempted)
     for_each_vcpu ( d, v )
         shadow_vcpu_teardown(v);
 
-    p2m_teardown(p2m_get_hostp2m(d), false);
+    p2m_teardown(p2m_get_hostp2m(d), false, preempted);
+    if ( preempted && *preempted )
+        return;
 
     paging_lock(d);
 
@@ -2952,7 +2958,7 @@ void shadow_final_teardown(struct domain *d)
         shadow_teardown(d, NULL);
 
     /* It is now safe to pull down the p2m map. */
-    p2m_teardown(p2m_get_hostp2m(d), true);
+    p2m_teardown(p2m_get_hostp2m(d), true, NULL);
     /* Free any shadow memory that the p2m teardown released */
     paging_lock(d);
     shadow_set_allocation(d, 0, NULL);
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 12:34:46 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 12:34:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420173.664762 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiET8-0000gs-7z; Tue, 11 Oct 2022 12:34:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420173.664762; Tue, 11 Oct 2022 12:34:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiET8-0000gk-5C; Tue, 11 Oct 2022 12:34:46 +0000
Received: by outflank-mailman (input) for mailman id 420173;
 Tue, 11 Oct 2022 12:34:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiET7-0000gd-DI
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:34:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiET7-0001PW-CZ
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:34:45 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiET7-000780-Bc
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:34:45 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=XtWcJvFJrzDxSxOBxc+dOWa+YprCYwN5wIsDzLA0Xos=; b=auIFfnOMGNWwRxo+aKj0DiKm4A
	AH4v71oe31jcXIY3WRqYAVdnx3O5ZaLj0ECcSVfGbeKwC3UCxLVQoyXlvtdU6ozNy5w71GFpV4puE
	vqWjETjE4ilfjs3KwQLcsXCB1q/UOk/EmDJsmDuTS/daErOIT6s9Uk0yq/7NMBdNDRDI=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] libxl, docs: Add per-arch extra default paging memory
Message-Id: <E1oiET7-000780-Bc@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 12:34:45 +0000

commit 156a239ea288972425f967ac807b3cb5b5e14874
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Mon Jun 6 06:17:27 2022 +0000
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:28:37 2022 +0200

    libxl, docs: Add per-arch extra default paging memory
    
    This commit adds a per-arch macro `EXTRA_DEFAULT_PAGING_MEM_MB`
    to the default paging memory size, in order to cover the p2m
    pool for extended regions of a xl-based guest on Arm.
    
    For Arm, the extra default paging memory is 128MB.
    For x86, the extra default paging memory is zero, since there
    are no extended regions on x86.
    
    Also update the xl.cfg documentation to describe the new Arm
    behaviour, matching these code changes.
    
    This is part of CVE-2022-33747 / XSA-409.
    
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
---
 docs/man/xl.cfg.5.pod.in        |  5 +++++
 tools/libs/light/libxl_arch.h   | 11 +++++++++++
 tools/libs/light/libxl_create.c |  7 ++++++-
 3 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index b2901e04cf..31e58b73b0 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -2725,6 +2725,11 @@ are not using hardware assisted paging (i.e. you are using shadow
 mode) and your guest workload consists of a very large number of
 similar processes then increasing this value may improve performance.
 
+On Arm, this field is used to determine the size of the guest P2M pages
+pool, and the default value is the same as x86 HAP mode, plus 512KB to
+cover the extended regions. Users should adjust this value if bigger
+P2M pool size is needed.
+
 =back
 
 =head2 Device-Model Options
diff --git a/tools/libs/light/libxl_arch.h b/tools/libs/light/libxl_arch.h
index 03b89929e6..247cca130f 100644
--- a/tools/libs/light/libxl_arch.h
+++ b/tools/libs/light/libxl_arch.h
@@ -99,10 +99,21 @@ void libxl__arch_update_domain_config(libxl__gc *gc,
 
 #define LAPIC_BASE_ADDRESS  0xfee00000
 #define ACPI_INFO_PHYSICAL_ADDRESS 0xfc000000
+#define EXTRA_DEFAULT_PAGING_MEM_MB 0
 
 int libxl__dom_load_acpi(libxl__gc *gc,
                          const libxl_domain_build_info *b_info,
                          struct xc_dom_image *dom);
+
+#else
+
+/*
+ * 128MB extra default paging memory on Arm for extended regions. This
+ * value is normally enough for domains that are not running backend.
+ * See the `shadow_memory` in xl.cfg documentation for more information.
+ */
+#define EXTRA_DEFAULT_PAGING_MEM_MB 128
+
 #endif
 
 #endif
diff --git a/tools/libs/light/libxl_create.c b/tools/libs/light/libxl_create.c
index b9dd2deedf..612eacfc7f 100644
--- a/tools/libs/light/libxl_create.c
+++ b/tools/libs/light/libxl_create.c
@@ -1035,12 +1035,17 @@ unsigned long libxl__get_required_paging_memory(unsigned long maxmem_kb,
      * plus 1 page per MiB of RAM for the P2M map (for non-PV guests),
      * plus 1 page per MiB of RAM to shadow the resident processes (for shadow
      * mode guests).
+     * plus 1 page per MiB of RAM for the architecture specific
+     * EXTRA_DEFAULT_PAGING_MEM_MB. On x86, this value is zero. On Arm, this
+     * value is 128 MiB to cover domain extended regions (enough for domains
+     * that are not running backend).
      * This is higher than the minimum that Xen would allocate if no value
      * were given (but the Xen minimum is for safety, not performance).
      */
     return 4 * (256 * smp_cpus +
                 ((type != LIBXL_DOMAIN_TYPE_PV) + !hap) *
-                (maxmem_kb / 1024));
+                (maxmem_kb / 1024) +
+                EXTRA_DEFAULT_PAGING_MEM_MB);
 }
 
 static unsigned long libxl__get_required_iommu_memory(unsigned long maxmem_kb)
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 12:34:56 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 12:34:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420174.664766 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiETI-0000jq-9L; Tue, 11 Oct 2022 12:34:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420174.664766; Tue, 11 Oct 2022 12:34:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiETI-0000jj-6h; Tue, 11 Oct 2022 12:34:56 +0000
Received: by outflank-mailman (input) for mailman id 420174;
 Tue, 11 Oct 2022 12:34:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiETH-0000ja-GG
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:34:55 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiETH-0001Ph-FX
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:34:55 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiETH-00078q-Ej
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:34:55 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=PoWsYIfnqCai9IYH2+MOjwuOASnma1nZkteIz8vV44M=; b=MDtgUjlozuDvYIrHzvPOSWL+jH
	Zu3alCqP7NAVPtl7Nvidq+jDtjxMX+dla87Z9wiuJlP0e4NFg7M6/H7EoYvlRNvT7gSgJhyIZ2S1s
	XbW3vvpSaoygeevT42V+g5SzpyLgRWchIGvk2PZBEVDniAtvzWrfQIB5zi5Wr0ZdEhBQ=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] xen/arm: Construct the P2M pages pool for guests
Message-Id: <E1oiETH-00078q-Ej@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 12:34:55 +0000

commit 55914f7fc91a468649b8a3ec3f53ae1c4aca6670
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Mon Jun 6 06:17:28 2022 +0000
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:28:39 2022 +0200

    xen/arm: Construct the P2M pages pool for guests
    
    This commit constructs the p2m pages pool for guests, introducing
    the necessary data structures and helpers.
    
    This is implemented by:
    
    - Adding a `struct paging_domain`, which contains a freelist, a
    counter and a spinlock, to `struct arch_domain` to track the free
    p2m pages and the total number of pages in the p2m pages pool.
    
    - Adding a helper `p2m_get_allocation` to get the p2m pool size.
    
    - Adding a helper `p2m_set_allocation` to set the p2m pages pool
    size. This helper should be called before allocating memory for
    a guest.
    
    - Adding a helper `p2m_teardown_allocation` to free the p2m pages
    pool. This helper should be called during xl domain destroy.
    
    This is part of CVE-2022-33747 / XSA-409.
    
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
 xen/arch/arm/include/asm/domain.h | 10 +++++
 xen/arch/arm/include/asm/p2m.h    |  4 ++
 xen/arch/arm/p2m.c                | 88 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 102 insertions(+)

diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
index 26a8348eed..2ce6764322 100644
--- a/xen/arch/arm/include/asm/domain.h
+++ b/xen/arch/arm/include/asm/domain.h
@@ -53,6 +53,14 @@ struct vtimer {
     uint64_t cval;
 };
 
+struct paging_domain {
+    spinlock_t lock;
+    /* Free P2M pages from the pre-allocated P2M pool */
+    struct page_list_head p2m_freelist;
+    /* Number of pages from the pre-allocated P2M pool */
+    unsigned long p2m_total_pages;
+};
+
 struct arch_domain
 {
 #ifdef CONFIG_ARM_64
@@ -64,6 +72,8 @@ struct arch_domain
 
     struct hvm_domain hvm;
 
+    struct paging_domain paging;
+
     struct vmmio vmmio;
 
     /* Continuable domain_relinquish_resources(). */
diff --git a/xen/arch/arm/include/asm/p2m.h b/xen/arch/arm/include/asm/p2m.h
index a15ea67f9b..42bfd548c4 100644
--- a/xen/arch/arm/include/asm/p2m.h
+++ b/xen/arch/arm/include/asm/p2m.h
@@ -218,6 +218,10 @@ void p2m_restore_state(struct vcpu *n);
 /* Print debugging/statistial info about a domain's p2m */
 void p2m_dump_info(struct domain *d);
 
+unsigned int p2m_get_allocation(struct domain *d);
+int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted);
+int p2m_teardown_allocation(struct domain *d);
+
 static inline void p2m_write_lock(struct p2m_domain *p2m)
 {
     write_lock(&p2m->lock);
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index b445f4d754..db385fe410 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -44,6 +44,92 @@ static uint64_t generate_vttbr(uint16_t vmid, mfn_t root_mfn)
     return (mfn_to_maddr(root_mfn) | ((uint64_t)vmid << 48));
 }
 
+/* Return the size of the pool, rounded up to the nearest MB */
+unsigned int p2m_get_allocation(struct domain *d)
+{
+    unsigned long nr_pages = ACCESS_ONCE(d->arch.paging.p2m_total_pages);
+
+    return ROUNDUP(nr_pages, 1 << (20 - PAGE_SHIFT)) >> (20 - PAGE_SHIFT);
+}
+
+/*
+ * Set the pool of pages to the required number of pages.
+ * Returns 0 for success, non-zero for failure.
+ * Call with d->arch.paging.lock held.
+ */
+int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted)
+{
+    struct page_info *pg;
+
+    ASSERT(spin_is_locked(&d->arch.paging.lock));
+
+    for ( ; ; )
+    {
+        if ( d->arch.paging.p2m_total_pages < pages )
+        {
+            /* Need to allocate more memory from domheap */
+            pg = alloc_domheap_page(NULL, 0);
+            if ( pg == NULL )
+            {
+                printk(XENLOG_ERR "Failed to allocate P2M pages.\n");
+                return -ENOMEM;
+            }
+            ACCESS_ONCE(d->arch.paging.p2m_total_pages) =
+                d->arch.paging.p2m_total_pages + 1;
+            page_list_add_tail(pg, &d->arch.paging.p2m_freelist);
+        }
+        else if ( d->arch.paging.p2m_total_pages > pages )
+        {
+            /* Need to return memory to domheap */
+            pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
+            if( pg )
+            {
+                ACCESS_ONCE(d->arch.paging.p2m_total_pages) =
+                    d->arch.paging.p2m_total_pages - 1;
+                free_domheap_page(pg);
+            }
+            else
+            {
+                printk(XENLOG_ERR
+                       "Failed to free P2M pages, P2M freelist is empty.\n");
+                return -ENOMEM;
+            }
+        }
+        else
+            break;
+
+        /* Check to see if we need to yield and try again */
+        if ( preempted && general_preempt_check() )
+        {
+            *preempted = true;
+            return -ERESTART;
+        }
+    }
+
+    return 0;
+}
+
+int p2m_teardown_allocation(struct domain *d)
+{
+    int ret = 0;
+    bool preempted = false;
+
+    spin_lock(&d->arch.paging.lock);
+    if ( d->arch.paging.p2m_total_pages != 0 )
+    {
+        ret = p2m_set_allocation(d, 0, &preempted);
+        if ( preempted )
+        {
+            spin_unlock(&d->arch.paging.lock);
+            return -ERESTART;
+        }
+        ASSERT(d->arch.paging.p2m_total_pages == 0);
+    }
+    spin_unlock(&d->arch.paging.lock);
+
+    return ret;
+}
+
 /* Unlock the flush and do a P2M TLB flush if necessary */
 void p2m_write_unlock(struct p2m_domain *p2m)
 {
@@ -1623,7 +1709,9 @@ int p2m_init(struct domain *d)
     unsigned int cpu;
 
     rwlock_init(&p2m->lock);
+    spin_lock_init(&d->arch.paging.lock);
     INIT_PAGE_LIST_HEAD(&p2m->pages);
+    INIT_PAGE_LIST_HEAD(&d->arch.paging.p2m_freelist);
 
     p2m->vmid = INVALID_VMID;
 
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 12:35:06 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 12:35:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420175.664770 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiETS-0000n4-Al; Tue, 11 Oct 2022 12:35:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420175.664770; Tue, 11 Oct 2022 12:35:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiETS-0000mw-88; Tue, 11 Oct 2022 12:35:06 +0000
Received: by outflank-mailman (input) for mailman id 420175;
 Tue, 11 Oct 2022 12:35:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiETR-0000mj-J9
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:35:05 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiETR-0001Q4-IU
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:35:05 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiETR-0007A0-He
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:35:05 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=bDrmpq/3l6dRh5w/iuH5E0DKktc2GVXiwf5fPBDs9kE=; b=pIPl3pXOFAvRGs7Dwrhy+LbYr2
	a8j37ZF3HLQBDq0IorIjloxILzLfFnMfLW8D2yp04CCyLIYNyDLl+7vyvL7sX4VbM8C9e3Z2IcPua
	h8OfN+NsyZB6Ly6HxTvK5bnkrMs/fWSxXCBfKiTAT0em+EFzWAUAM9s5hazPqAK2wvTA=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] xen/arm, libxl: Implement XEN_DOMCTL_shadow_op for Arm
Message-Id: <E1oiETR-0007A0-He@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 12:35:05 +0000

commit cf2a68d2ffbc3ce95e01449d46180bddb10d24a0
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Mon Jun 6 06:17:29 2022 +0000
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:28:42 2022 +0200

    xen/arm, libxl: Implement XEN_DOMCTL_shadow_op for Arm
    
    This commit implements `XEN_DOMCTL_shadow_op` support in Xen for
    Arm. The size of the p2m page pool for xl guests is meant to be
    determined via `XEN_DOMCTL_shadow_op`. Hence, this commit:
    
    - Introduces a function `p2m_domctl` and implements the subops
    `XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION` and
    `XEN_DOMCTL_SHADOW_OP_GET_ALLOCATION` of `XEN_DOMCTL_shadow_op`.
    
    - Adds the `XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION` support in libxl.
    
    This enables setting the shadow memory pool size when creating a
    guest from xl, and querying the shadow memory pool size from Xen.
    
    Note that the `XEN_DOMCTL_shadow_op` added in this commit is only
    a dummy op; the functionality of setting/getting the p2m memory
    pool size for xl guests will be added in follow-up commits.
    
    This is part of CVE-2022-33747 / XSA-409.
    
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
 tools/libs/light/libxl_arm.c | 12 ++++++++++++
 xen/arch/arm/domctl.c        | 32 ++++++++++++++++++++++++++++++++
 2 files changed, 44 insertions(+)

diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index 1a3ac1646e..2a5e93c284 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -209,6 +209,18 @@ int libxl__arch_domain_create(libxl__gc *gc,
                               libxl__domain_build_state *state,
                               uint32_t domid)
 {
+    libxl_ctx *ctx = libxl__gc_owner(gc);
+    unsigned int shadow_mb = DIV_ROUNDUP(d_config->b_info.shadow_memkb, 1024);
+
+    int r = xc_shadow_control(ctx->xch, domid,
+                              XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION,
+                              &shadow_mb, 0);
+    if (r) {
+        LOGED(ERROR, domid,
+              "Failed to set %u MiB shadow allocation", shadow_mb);
+        return ERROR_FAIL;
+    }
+
     return 0;
 }
 
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 1baf25c3d9..9bf72e6930 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -47,11 +47,43 @@ static int handle_vuart_init(struct domain *d,
     return rc;
 }
 
+static long p2m_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
+                       XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
+{
+    if ( unlikely(d == current->domain) )
+    {
+        printk(XENLOG_ERR "Tried to do a p2m domctl op on itself.\n");
+        return -EINVAL;
+    }
+
+    if ( unlikely(d->is_dying) )
+    {
+        printk(XENLOG_ERR "Tried to do a p2m domctl op on dying domain %u\n",
+               d->domain_id);
+        return -EINVAL;
+    }
+
+    switch ( sc->op )
+    {
+    case XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION:
+        return 0;
+    case XEN_DOMCTL_SHADOW_OP_GET_ALLOCATION:
+        return 0;
+    default:
+    {
+        printk(XENLOG_ERR "Bad p2m domctl op %u\n", sc->op);
+        return -EINVAL;
+    }
+    }
+}
+
 long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
                     XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     switch ( domctl->cmd )
     {
+    case XEN_DOMCTL_shadow_op:
+        return p2m_domctl(d, &domctl->u.shadow_op, u_domctl);
     case XEN_DOMCTL_cacheflush:
     {
         gfn_t s = _gfn(domctl->u.cacheflush.start_pfn);
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 12:35:16 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 12:35:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420176.664774 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiETc-0000qb-Cd; Tue, 11 Oct 2022 12:35:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420176.664774; Tue, 11 Oct 2022 12:35:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiETc-0000qU-9g; Tue, 11 Oct 2022 12:35:16 +0000
Received: by outflank-mailman (input) for mailman id 420176;
 Tue, 11 Oct 2022 12:35:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiETb-0000qO-MT
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:35:15 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiETb-0001QV-Ln
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:35:15 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiETb-0007Ar-Kt
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:35:15 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=8EICWta4k8Id5cQvK1jMcIXaTEPUBkDquqv5iNz4RbQ=; b=BcIbb+Rx3c8AjyfhZAiX2BXUPZ
	IDaLdZX6M36NEJf+dbMIzvIewc+I77va9Tgcx8ghqzhRa8znf5SeHf/hDqhwKVDz0XhcRFRDYhHZ9
	dTJ2wSdl9oV9km16g/kmcsNk4/4r8893Tc8tPvluCiZ1pShUw8ZjfejKjZo+SC7cdeTo=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] xen/arm: Allocate and free P2M pages from the P2M pool
Message-Id: <E1oiETb-0007Ar-Kt@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 12:35:15 +0000

commit cbea5a1149ca7fd4b7cdbfa3ec2e4f109b601ff7
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Mon Jun 6 06:17:30 2022 +0000
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:28:44 2022 +0200

    xen/arm: Allocate and free P2M pages from the P2M pool
    
    This commit sets up and tears down the p2m page pool for
    non-privileged Arm guests by calling `p2m_set_allocation` and
    `p2m_teardown_allocation`.
    
    - For dom0, P2M pages should come directly from the heap instead of
    the p2m pool, so that the kernel may take advantage of the extended
    regions.
    
    - For xl guests, the p2m pool is set up via `XEN_DOMCTL_shadow_op`
    and destroyed in `domain_relinquish_resources`. Note that
    domctl->u.shadow_op.mb is updated with the new size when setting
    the p2m pool.
    
    - For dom0less domUs, the p2m pool is set up before allocating
    memory during domain creation. Users can specify the p2m pool size
    via the `xen,domain-p2m-mem-mb` dts property.
    
    To actually allocate/free pages from the p2m pool, this commit adds
    two helper functions, `p2m_alloc_page` and `p2m_free_page`, for
    `struct p2m_domain`. By replacing `alloc_domheap_page` and
    `free_domheap_page` with these helpers, p2m pages are added to and
    removed from the p2m pool's page list rather than the heap.
    
    Since pages returned by `p2m_alloc_page` are already cleaned, take
    the opportunity to remove the redundant `clean_page` in
    `p2m_create_table`.
    
    This is part of CVE-2022-33747 / XSA-409.
    
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
 docs/misc/arm/device-tree/booting.txt |  8 +++++
 xen/arch/arm/domain.c                 |  6 ++++
 xen/arch/arm/domain_build.c           | 29 ++++++++++++++++++
 xen/arch/arm/domctl.c                 | 23 +++++++++++++-
 xen/arch/arm/p2m.c                    | 57 ++++++++++++++++++++++++++++++++---
 5 files changed, 118 insertions(+), 5 deletions(-)

diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
index c47a05e0da..87eaa3e254 100644
--- a/docs/misc/arm/device-tree/booting.txt
+++ b/docs/misc/arm/device-tree/booting.txt
@@ -215,6 +215,14 @@ with the following properties:
     In the future other possible property values might be added to
     enable only selected interfaces.
 
+- xen,domain-p2m-mem-mb
+
+    Optional. A 32-bit integer specifying the amount of megabytes of RAM
+    used for the domain P2M pool. This is in-sync with the shadow_memory
+    option in xl.cfg. Leaving this field empty in device tree will lead to
+    the default size of domain P2M pool, i.e. 1MB per guest vCPU plus 4KB
+    per MB of guest RAM plus 512KB for guest extended regions.
+
 Under the "xen,domain" compatible node, one or more sub-nodes are present
 for the DomU kernel and ramdisk.
 
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 746ad3438a..2c84e6dbbb 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -1002,6 +1002,7 @@ enum {
     PROG_page,
     PROG_mapping,
     PROG_p2m,
+    PROG_p2m_pool,
     PROG_done,
 };
 
@@ -1067,6 +1068,11 @@ int domain_relinquish_resources(struct domain *d)
         if ( ret )
             return ret;
 
+    PROGRESS(p2m_pool):
+        ret = p2m_teardown_allocation(d);
+        if( ret )
+            return ret;
+
     PROGRESS(done):
         break;
 
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 40e3c2e119..db97536fe8 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -3622,6 +3622,21 @@ static void __init find_gnttab_region(struct domain *d,
            kinfo->gnttab_start, kinfo->gnttab_start + kinfo->gnttab_size);
 }
 
+static unsigned long __init domain_p2m_pages(unsigned long maxmem_kb,
+                                             unsigned int smp_cpus)
+{
+    /*
+     * Keep in sync with libxl__get_required_paging_memory().
+     * 256 pages (1MB) per vcpu, plus 1 page per MiB of RAM for the P2M map,
+     * plus 128 pages to cover extended regions.
+     */
+    unsigned long memkb = 4 * (256 * smp_cpus + (maxmem_kb / 1024) + 128);
+
+    BUILD_BUG_ON(PAGE_SIZE != SZ_4K);
+
+    return DIV_ROUND_UP(memkb, 1024) << (20 - PAGE_SHIFT);
+}
+
 static int __init construct_domain(struct domain *d, struct kernel_info *kinfo)
 {
     unsigned int i;
@@ -3733,6 +3748,8 @@ static int __init construct_domU(struct domain *d,
     const char *dom0less_enhanced;
     int rc;
     u64 mem;
+    u32 p2m_mem_mb;
+    unsigned long p2m_pages;
 
     rc = dt_property_read_u64(node, "memory", &mem);
     if ( !rc )
@@ -3742,6 +3759,18 @@ static int __init construct_domU(struct domain *d,
     }
     kinfo.unassigned_mem = (paddr_t)mem * SZ_1K;
 
+    rc = dt_property_read_u32(node, "xen,domain-p2m-mem-mb", &p2m_mem_mb);
+    /* If xen,domain-p2m-mem-mb is not specified, use the default value. */
+    p2m_pages = rc ?
+                p2m_mem_mb << (20 - PAGE_SHIFT) :
+                domain_p2m_pages(mem, d->max_vcpus);
+
+    spin_lock(&d->arch.paging.lock);
+    rc = p2m_set_allocation(d, p2m_pages, NULL);
+    spin_unlock(&d->arch.paging.lock);
+    if ( rc != 0 )
+        return rc;
+
     printk("*** LOADING DOMU cpus=%u memory=%"PRIx64"KB ***\n", d->max_vcpus, mem);
 
     kinfo.vpl011 = dt_property_read_bool(node, "vpl011");
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 9bf72e6930..c8fdeb1240 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -50,6 +50,9 @@ static int handle_vuart_init(struct domain *d,
 static long p2m_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
                        XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
+    long rc;
+    bool preempted = false;
+
     if ( unlikely(d == current->domain) )
     {
         printk(XENLOG_ERR "Tried to do a p2m domctl op on itself.\n");
@@ -66,9 +69,27 @@ static long p2m_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
     switch ( sc->op )
     {
     case XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION:
-        return 0;
+    {
+        /* Allow and handle preemption */
+        spin_lock(&d->arch.paging.lock);
+        rc = p2m_set_allocation(d, sc->mb << (20 - PAGE_SHIFT), &preempted);
+        spin_unlock(&d->arch.paging.lock);
+
+        if ( preempted )
+            /* Not finished. Set up to re-run the call. */
+            rc = hypercall_create_continuation(__HYPERVISOR_domctl, "h",
+                                               u_domctl);
+        else
+            /* Finished. Return the new allocation. */
+            sc->mb = p2m_get_allocation(d);
+
+        return rc;
+    }
     case XEN_DOMCTL_SHADOW_OP_GET_ALLOCATION:
+    {
+        sc->mb = p2m_get_allocation(d);
         return 0;
+    }
     default:
     {
         printk(XENLOG_ERR "Bad p2m domctl op %u\n", sc->op);
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index db385fe410..f17500ddf3 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -44,6 +44,54 @@ static uint64_t generate_vttbr(uint16_t vmid, mfn_t root_mfn)
     return (mfn_to_maddr(root_mfn) | ((uint64_t)vmid << 48));
 }
 
+static struct page_info *p2m_alloc_page(struct domain *d)
+{
+    struct page_info *pg;
+
+    spin_lock(&d->arch.paging.lock);
+    /*
+     * For hardware domain, there should be no limit in the number of pages that
+     * can be allocated, so that the kernel may take advantage of the extended
+     * regions. Hence, allocate p2m pages for hardware domains from heap.
+     */
+    if ( is_hardware_domain(d) )
+    {
+        pg = alloc_domheap_page(NULL, 0);
+        if ( pg == NULL )
+        {
+            printk(XENLOG_G_ERR "Failed to allocate P2M pages for hwdom.\n");
+            spin_unlock(&d->arch.paging.lock);
+            return NULL;
+        }
+    }
+    else
+    {
+        pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
+        if ( unlikely(!pg) )
+        {
+            spin_unlock(&d->arch.paging.lock);
+            return NULL;
+        }
+        d->arch.paging.p2m_total_pages--;
+    }
+    spin_unlock(&d->arch.paging.lock);
+
+    return pg;
+}
+
+static void p2m_free_page(struct domain *d, struct page_info *pg)
+{
+    spin_lock(&d->arch.paging.lock);
+    if ( is_hardware_domain(d) )
+        free_domheap_page(pg);
+    else
+    {
+        d->arch.paging.p2m_total_pages++;
+        page_list_add_tail(pg, &d->arch.paging.p2m_freelist);
+    }
+    spin_unlock(&d->arch.paging.lock);
+}
+
 /* Return the size of the pool, rounded up to the nearest MB */
 unsigned int p2m_get_allocation(struct domain *d)
 {
@@ -747,7 +795,7 @@ static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry)
 
     ASSERT(!p2m_is_valid(*entry));
 
-    page = alloc_domheap_page(NULL, 0);
+    page = p2m_alloc_page(p2m->domain);
     if ( page == NULL )
         return -ENOMEM;
 
@@ -877,7 +925,7 @@ static void p2m_free_entry(struct p2m_domain *p2m,
     pg = mfn_to_page(mfn);
 
     page_list_del(pg, &p2m->pages);
-    free_domheap_page(pg);
+    p2m_free_page(p2m->domain, pg);
 }
 
 static bool p2m_split_superpage(struct p2m_domain *p2m, lpae_t *entry,
@@ -901,7 +949,7 @@ static bool p2m_split_superpage(struct p2m_domain *p2m, lpae_t *entry,
     ASSERT(level < target);
     ASSERT(p2m_is_superpage(*entry, level));
 
-    page = alloc_domheap_page(NULL, 0);
+    page = p2m_alloc_page(p2m->domain);
     if ( !page )
         return false;
 
@@ -1665,7 +1713,7 @@ int p2m_teardown(struct domain *d)
 
     while ( (pg = page_list_remove_head(&p2m->pages)) )
     {
-        free_domheap_page(pg);
+        p2m_free_page(p2m->domain, pg);
         count++;
         /* Arbitrarily preempt every 512 iterations */
         if ( !(count % 512) && hypercall_preempt_check() )
@@ -1689,6 +1737,7 @@ void p2m_final_teardown(struct domain *d)
         return;
 
     ASSERT(page_list_empty(&p2m->pages));
+    ASSERT(page_list_empty(&d->arch.paging.p2m_freelist));
 
     if ( p2m->root )
         free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 12:35:26 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 12:35:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420177.664778 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiETm-0000u7-FV; Tue, 11 Oct 2022 12:35:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420177.664778; Tue, 11 Oct 2022 12:35:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiETm-0000tz-Ci; Tue, 11 Oct 2022 12:35:26 +0000
Received: by outflank-mailman (input) for mailman id 420177;
 Tue, 11 Oct 2022 12:35:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiETl-0000tr-PI
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:35:25 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiETl-0001Qf-Od
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:35:25 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiETl-0007Bg-Nx
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:35:25 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=ahAE+TmCrJgjG6iNWCM8XBgZiGuL74XF65f6wcouWow=; b=Tc9ZvVM3r+eeCGy0TZxzv2ANgO
	WwL2a4nR6PYPDN/ybsXYf7kqOZVIMXRsyU3rC1Yeh99uNCInz7/A15oycgMKDxMQ5vumUn6JAKdpc
	ztmaLZ/5CSFOP3MtVj1ZShVr/kQzaJrHBBHdsKIhbCwqWTAw3yElFLvTIZgg+7yWI/dU=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] gnttab: correct locking on transitive grant copy error path
Message-Id: <E1oiETl-0007Bg-Nx@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 12:35:25 +0000

commit 6e3aab858eef614a21a782a3b73acc88e74690ea
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 14:29:30 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:29:30 2022 +0200

    gnttab: correct locking on transitive grant copy error path
    
    While the comment next to the lock dropping in preparation of
    recursively calling acquire_grant_for_copy() mistakenly talks about the
    rd == td case (excluded a few lines further up), the same concerns apply
    to calling release_grant_for_copy() on a subsequent error path.
    
    This is CVE-2022-33748 / XSA-411.
    
    Fixes: ad48fb963dbf ("gnttab: fix transitive grant handling")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
 xen/common/grant_table.c | 19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index fba329dcc2..ee7cc496b8 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -2622,9 +2622,8 @@ acquire_grant_for_copy(
                      trans_domid);
 
         /*
-         * acquire_grant_for_copy() could take the lock on the
-         * remote table (if rd == td), so we have to drop the lock
-         * here and reacquire.
+         * acquire_grant_for_copy() will take the lock on the remote table,
+         * so we have to drop the lock here and reacquire.
          */
         active_entry_release(act);
         grant_read_unlock(rgt);
@@ -2661,11 +2660,25 @@ acquire_grant_for_copy(
                           act->trans_gref != trans_gref ||
                           !act->is_sub_page)) )
         {
+            /*
+             * Like above for acquire_grant_for_copy() we need to drop and then
+             * re-acquire the locks here to prevent lock order inversion issues.
+             * Unlike for acquire_grant_for_copy() we don't need to re-check
+             * anything, as release_grant_for_copy() doesn't depend on the grant
+             * table entry: It only updates internal state and the status flags.
+             */
+            active_entry_release(act);
+            grant_read_unlock(rgt);
+
             release_grant_for_copy(td, trans_gref, readonly);
             rcu_unlock_domain(td);
+
+            grant_read_lock(rgt);
+            act = active_entry_acquire(rgt, gref);
             reduce_status_for_pin(rd, act, status, readonly);
             active_entry_release(act);
             grant_read_unlock(rgt);
+
             put_page(*page);
             *page = NULL;
             return ERESTART;
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 12:35:37 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 12:35:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420178.664782 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiETx-0000xh-H6; Tue, 11 Oct 2022 12:35:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420178.664782; Tue, 11 Oct 2022 12:35:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiETx-0000xZ-EI; Tue, 11 Oct 2022 12:35:37 +0000
Received: by outflank-mailman (input) for mailman id 420178;
 Tue, 11 Oct 2022 12:35:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiETv-0000xH-Sn
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:35:35 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiETv-0001Qq-S4
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:35:35 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiETv-0007Ch-RD
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 12:35:35 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=TGtJV8UJTEnJkbPwX7a1imZ4stcdudJlV19UJOrTLIU=; b=r79SpC7NzRFBg3m41dknWER5ja
	O+kJuZGNsF/dAvWBgZFdX2sqEs0829xXtKONdwmfEgiiTt5GLaDbO9js4NyqRNIO85xUF49J6KyhT
	UxIbjRv6pmTf3gKjIlQxbBFqOp7wgEb7LGz9qV48cJeO1aJ3CDK11Go/3kpjONDWRQuE=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] x86emul: respect NSCB
Message-Id: <E1oiETv-0007Ch-RD@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 12:35:35 +0000

commit 87a20c98d9f0f422727fe9b4b9e22c2c43a5cd9c
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 14:30:41 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:30:41 2022 +0200

    x86emul: respect NSCB
    
    protmode_load_seg() would better adhere to that "feature" of clearing
    base (and limit) during NULL selector loads.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/arch/x86/x86_emulate/x86_emulate.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/x86_emulate/x86_emulate.c b/xen/arch/x86/x86_emulate/x86_emulate.c
index f6778dd493..e38f98b547 100644
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -1970,6 +1970,7 @@ amd_like(const struct x86_emulate_ctxt *ctxt)
 #define vcpu_has_tbm()         (ctxt->cpuid->extd.tbm)
 #define vcpu_has_clzero()      (ctxt->cpuid->extd.clzero)
 #define vcpu_has_wbnoinvd()    (ctxt->cpuid->extd.wbnoinvd)
+#define vcpu_has_nscb()        (ctxt->cpuid->extd.nscb)
 
 #define vcpu_has_bmi1()        (ctxt->cpuid->feat.bmi1)
 #define vcpu_has_hle()         (ctxt->cpuid->feat.hle)
@@ -2102,7 +2103,7 @@ protmode_load_seg(
         case x86_seg_tr:
             goto raise_exn;
         }
-        if ( !_amd_like(cp) || !ops->read_segment ||
+        if ( !_amd_like(cp) || vcpu_has_nscb() || !ops->read_segment ||
              ops->read_segment(seg, sreg, ctxt) != X86EMUL_OKAY )
             memset(sreg, 0, sizeof(*sreg));
         else
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:11:08 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:11:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420214.664808 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF2H-00071N-LQ; Tue, 11 Oct 2022 13:11:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420214.664808; Tue, 11 Oct 2022 13:11:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF2H-00071F-IY; Tue, 11 Oct 2022 13:11:05 +0000
Received: by outflank-mailman (input) for mailman id 420214;
 Tue, 11 Oct 2022 13:11:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF2G-000719-S1
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:11:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF2G-0002B4-Nm
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:11:04 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF2G-00018g-Mm
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:11:04 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=0Z8DpJj9hvHqXN3cGmRMdOwSUz7CxzmcqQCM1dLHHTo=; b=68184acioLzSJWD0ZPyeE/hJ0C
	ye59piX5UPUSX5rPnVYuEZVHxP2QIXLGyWXeaIZ1T29oIy8o6YpY3a4T7/G+OvUscvA5a99YTVGYT
	UjugxTx1ZFHUYo1CdF0jfKWlFTbHkus3Kv1sGCNHLoRKL48CbV/Q/l0pIj5MEDZIsI7g=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.16] update Xen version to 4.16.3-pre
Message-Id: <E1oiF2G-00018g-Mm@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:11:04 +0000

commit 4aa32912ebeda8cb94d1c3941e7f1f0a2d4f921b
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 14:49:41 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:49:41 2022 +0200

    update Xen version to 4.16.3-pre
---
 xen/Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/Makefile b/xen/Makefile
index 76d0a3ff25..8a403ee896 100644
--- a/xen/Makefile
+++ b/xen/Makefile
@@ -2,7 +2,7 @@
 # All other places this is stored (eg. compile.h) should be autogenerated.
 export XEN_VERSION       = 4
 export XEN_SUBVERSION    = 16
-export XEN_EXTRAVERSION ?= .2$(XEN_VENDORVERSION)
+export XEN_EXTRAVERSION ?= .3-pre$(XEN_VENDORVERSION)
 export XEN_FULLVERSION   = $(XEN_VERSION).$(XEN_SUBVERSION)$(XEN_EXTRAVERSION)
 -include xen-version
 
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.16


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:11:15 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:11:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420215.664811 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF2R-00073R-Mi; Tue, 11 Oct 2022 13:11:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420215.664811; Tue, 11 Oct 2022 13:11:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF2R-00073K-K6; Tue, 11 Oct 2022 13:11:15 +0000
Received: by outflank-mailman (input) for mailman id 420215;
 Tue, 11 Oct 2022 13:11:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF2Q-000737-Rc
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:11:14 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF2Q-0002Bb-Qo
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:11:14 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF2Q-00019I-Pu
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:11:14 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=wwZi1wW68v434TvVC3pKKOVkwNoJQznMFBkEg9tuk+0=; b=qTUS9YtsyGnQ3MTMGp6zMruKV+
	Y36NubU4gaFhIaQ0w4l76JZPxdqFGKrN8jOb+5ZynN6vuWPsS23DYbQFSsUh5zaeCPrjlY0vePUEK
	QBsQKvohCJYFhlrhHXa8JvAYRv4QVC7YT0scJ/Lfs2ou0EPhLFX2UcKrSoq6NFjlwlvA=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.16] xen/arm: p2m: Prevent adding mapping when domain is dying
Message-Id: <E1oiF2Q-00019I-Pu@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:11:14 +0000

commit 8d9531a3421dad2b0012e09e6f41d5274e162064
Author:     Julien Grall <jgrall@amazon.com>
AuthorDate: Tue Oct 11 14:52:13 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:52:13 2022 +0200

    xen/arm: p2m: Prevent adding mapping when domain is dying
    
    During the domain destroy process, the domain will still be accessible
    until it is fully destroyed. So is the P2M, because we don't bail
    out early if is_dying is non-zero. If a domain has permission to
    modify another domain's P2M (e.g. dom0, or a stubdomain), then
    foreign mappings can be added past relinquish_p2m_mapping().

    Therefore, we need to prevent mappings from being added while the
    domain is dying. This commit does so by adding a d->is_dying check
    to p2m_set_entry(). It also enhances the check in
    relinquish_p2m_mapping() to make sure that no mappings can be added
    to the P2M after the P2M lock is released.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Tested-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    master commit: 3ebe773293e3b945460a3d6f54f3b91915397bab
    master date: 2022-10-11 14:20:18 +0200
---
 xen/arch/arm/p2m.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 3349b464a3..1affdafadb 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1093,6 +1093,15 @@ int p2m_set_entry(struct p2m_domain *p2m,
 {
     int rc = 0;
 
+    /*
+     * Any reference taken by the P2M mappings (e.g. foreign mapping) will
+     * be dropped in relinquish_p2m_mapping(). As the P2M will still
+     * be accessible after, we need to prevent mapping to be added when the
+     * domain is dying.
+     */
+    if ( unlikely(p2m->domain->is_dying) )
+        return -ENOMEM;
+
     while ( nr )
     {
         unsigned long mask;
@@ -1610,6 +1619,8 @@ int relinquish_p2m_mapping(struct domain *d)
     unsigned int order;
     gfn_t start, end;
 
+    BUG_ON(!d->is_dying);
+    /* No mappings can be added in the P2M after the P2M lock is released. */
     p2m_write_lock(p2m);
 
     start = p2m->lowest_mapped_gfn;
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.16


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:11:25 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:11:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420216.664816 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF2b-00076U-P6; Tue, 11 Oct 2022 13:11:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420216.664816; Tue, 11 Oct 2022 13:11:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF2b-00076K-MG; Tue, 11 Oct 2022 13:11:25 +0000
Received: by outflank-mailman (input) for mailman id 420216;
 Tue, 11 Oct 2022 13:11:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF2a-00076E-Uq
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:11:24 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF2a-0002Bl-U3
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:11:24 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF2a-00019u-T0
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:11:24 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=X8vnekLBUnde8/uVoEYRxjW10n4Gk+K6vtZAo21gvbQ=; b=eyYjVsK6vx8jMxT6+3FQXDO6t4
	xFsaZkvIk1gqCx3zw4gWaSKIB0+y7DBxu6wcRqPrd0Uo7T/YeIvl2A4CnfJchw/GRodqy1uG/FBG5
	9FTAFxZ6LDV8Bka5GI/XJwmjXfYqI9rOeMsXL8MaSs1YKGbNVmhOGw+JsSXjxnGWfgog=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.16] xen/arm: p2m: Handle preemption when freeing intermediate page tables
Message-Id: <E1oiF2a-00019u-T0@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:11:24 +0000

commit 937fdbad5180440888f1fcee46299103327efa90
Author:     Julien Grall <jgrall@amazon.com>
AuthorDate: Tue Oct 11 14:52:27 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:52:27 2022 +0200

    xen/arm: p2m: Handle preemption when freeing intermediate page tables
    
    At the moment the P2M page tables are freed, without any preemption,
    when the domain structure is freed. As the P2M is quite large,
    iterating through it may take more time than is reasonable without
    intermediate preemption (to run softirqs and perhaps the scheduler).

    Split p2m_teardown() into two parts: one preemptible and called when
    relinquishing the resources, the other non-preemptible and called
    when freeing the domain structure.

    As we are now freeing the P2M pages early, we also need to prevent
    further allocation if someone calls p2m_set_entry() past p2m_teardown()
    (I wasn't able to prove this will never happen). This is done by
    the domain->is_dying check in p2m_set_entry() added by the previous
    patch.

    Similarly, we want to make sure that no one can access the freed
    pages. Therefore the root is cleared before the pages are freed.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Tested-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    master commit: 3202084566bba0ef0c45caf8c24302f83d92f9c8
    master date: 2022-10-11 14:20:56 +0200
---
 xen/arch/arm/domain.c     | 10 ++++++++--
 xen/arch/arm/p2m.c        | 47 ++++++++++++++++++++++++++++++++++++++++++++---
 xen/include/asm-arm/p2m.h | 13 +++++++++++--
 3 files changed, 63 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 96e1b23550..2694c39127 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -789,10 +789,10 @@ fail:
 void arch_domain_destroy(struct domain *d)
 {
     /* IOMMU page table is shared with P2M, always call
-     * iommu_domain_destroy() before p2m_teardown().
+     * iommu_domain_destroy() before p2m_final_teardown().
      */
     iommu_domain_destroy(d);
-    p2m_teardown(d);
+    p2m_final_teardown(d);
     domain_vgic_free(d);
     domain_vuart_free(d);
     free_xenheap_page(d->shared_info);
@@ -996,6 +996,7 @@ enum {
     PROG_xen,
     PROG_page,
     PROG_mapping,
+    PROG_p2m,
     PROG_done,
 };
 
@@ -1056,6 +1057,11 @@ int domain_relinquish_resources(struct domain *d)
         if ( ret )
             return ret;
 
+    PROGRESS(p2m):
+        ret = p2m_teardown(d);
+        if ( ret )
+            return ret;
+
     PROGRESS(done):
         break;
 
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 1affdafadb..27418ee5ee 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1527,17 +1527,58 @@ static void p2m_free_vmid(struct domain *d)
     spin_unlock(&vmid_alloc_lock);
 }
 
-void p2m_teardown(struct domain *d)
+int p2m_teardown(struct domain *d)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    unsigned long count = 0;
     struct page_info *pg;
+    unsigned int i;
+    int rc = 0;
+
+    p2m_write_lock(p2m);
+
+    /*
+     * We are about to free the intermediate page-tables, so clear the
+     * root to prevent any walk to use them.
+     */
+    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
+        clear_and_clean_page(p2m->root + i);
+
+    /*
+     * The domain will not be scheduled anymore, so in theory we should
+     * not need to flush the TLBs. Do it for safety purpose.
+     *
+     * Note that all the devices have already been de-assigned. So we don't
+     * need to flush the IOMMU TLB here.
+     */
+    p2m_force_tlb_flush_sync(p2m);
+
+    while ( (pg = page_list_remove_head(&p2m->pages)) )
+    {
+        free_domheap_page(pg);
+        count++;
+        /* Arbitrarily preempt every 512 iterations */
+        if ( !(count % 512) && hypercall_preempt_check() )
+        {
+            rc = -ERESTART;
+            break;
+        }
+    }
+
+    p2m_write_unlock(p2m);
+
+    return rc;
+}
+
+void p2m_final_teardown(struct domain *d)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
     /* p2m not actually initialized */
     if ( !p2m->domain )
         return;
 
-    while ( (pg = page_list_remove_head(&p2m->pages)) )
-        free_domheap_page(pg);
+    ASSERT(page_list_empty(&p2m->pages));
 
     if ( p2m->root )
         free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 8f11d9c97b..b3ba83283e 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -192,8 +192,17 @@ void setup_virt_paging(void);
 /* Init the datastructures for later use by the p2m code */
 int p2m_init(struct domain *d);
 
-/* Return all the p2m resources to Xen. */
-void p2m_teardown(struct domain *d);
+/*
+ * The P2M resources are freed in two parts:
+ *  - p2m_teardown() will be called when relinquish the resources. It
+ *    will free large resources (e.g. intermediate page-tables) that
+ *    requires preemption.
+ *  - p2m_final_teardown() will be called when domain struct is been
+ *    freed. This *cannot* be preempted and therefore one small
+ *    resources should be freed here.
+ */
+int p2m_teardown(struct domain *d);
+void p2m_final_teardown(struct domain *d);
 
 /*
  * Remove mapping refcount on each mapping page in the p2m
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.16


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:11:35 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:11:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420217.664820 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF2l-000791-Qu; Tue, 11 Oct 2022 13:11:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420217.664820; Tue, 11 Oct 2022 13:11:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF2l-00078t-Nq; Tue, 11 Oct 2022 13:11:35 +0000
Received: by outflank-mailman (input) for mailman id 420217;
 Tue, 11 Oct 2022 13:11:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF2l-00078e-22
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:11:35 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF2l-0002Bz-1E
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:11:35 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF2l-0001AO-0D
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:11:35 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=wBkytvAJZ1hCcLIHlGZGfAqDckHT4AyenuHOPgeeW2w=; b=jZ4qbevnnDd2BxDuneQbYbmgvt
	8pLQTObj9z2yt8e7tc+2ORFLGL/Pl4Jld8qnXNoUntkWrUOwPflIL4Sa/GcOFRiGuWaitx15nrd1J
	HmNG2mcIsOLp5v1wZJ3jC0rod5eUeiisLdrfOLajZIJwOOog73MB3hCixyN+g2ufbymU=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.16] x86/p2m: add option to skip root pagetable removal in p2m_teardown()
Message-Id: <E1oiF2l-0001AO-0D@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:11:35 +0000

commit 8fc19c143b8aa563077f3d5c46fcc0a54dc04f35
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 14:52:39 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:52:39 2022 +0200

    x86/p2m: add option to skip root pagetable removal in p2m_teardown()
    
    Add a new parameter to p2m_teardown() in order to select whether the
    root page table should also be freed.  Note that all users are
    adjusted to pass the parameter to remove the root page tables, so
    behavior is not modified.
    
    No functional change intended.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Suggested-by: Julien Grall <julien@xen.org>
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: 1df52a270225527ae27bfa2fc40347bf93b78357
    master date: 2022-10-11 14:21:23 +0200
---
 xen/arch/x86/mm/hap/hap.c       |  6 +++---
 xen/arch/x86/mm/p2m.c           | 20 ++++++++++++++++----
 xen/arch/x86/mm/shadow/common.c |  4 ++--
 xen/include/asm-x86/p2m.h       |  2 +-
 4 files changed, 22 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 47a7487fa7..a8f5a19da9 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -541,18 +541,18 @@ void hap_final_teardown(struct domain *d)
         }
 
         for ( i = 0; i < MAX_ALTP2M; i++ )
-            p2m_teardown(d->arch.altp2m_p2m[i]);
+            p2m_teardown(d->arch.altp2m_p2m[i], true);
     }
 
     /* Destroy nestedp2m's first */
     for (i = 0; i < MAX_NESTEDP2M; i++) {
-        p2m_teardown(d->arch.nested_p2m[i]);
+        p2m_teardown(d->arch.nested_p2m[i], true);
     }
 
     if ( d->arch.paging.hap.total_pages != 0 )
         hap_teardown(d, NULL);
 
-    p2m_teardown(p2m_get_hostp2m(d));
+    p2m_teardown(p2m_get_hostp2m(d), true);
     /* Free any memory that the p2m teardown released */
     paging_lock(d);
     hap_set_allocation(d, 0, NULL);
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index def1695cf0..aba4f17cbe 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -749,11 +749,11 @@ int p2m_alloc_table(struct p2m_domain *p2m)
  * hvm fixme: when adding support for pvh non-hardware domains, this path must
  * cleanup any foreign p2m types (release refcnts on them).
  */
-void p2m_teardown(struct p2m_domain *p2m)
+void p2m_teardown(struct p2m_domain *p2m, bool remove_root)
 /* Return all the p2m pages to Xen.
  * We know we don't have any extra mappings to these pages */
 {
-    struct page_info *pg;
+    struct page_info *pg, *root_pg = NULL;
     struct domain *d;
 
     if (p2m == NULL)
@@ -763,10 +763,22 @@ void p2m_teardown(struct p2m_domain *p2m)
 
     p2m_lock(p2m);
     ASSERT(atomic_read(&d->shr_pages) == 0);
-    p2m->phys_table = pagetable_null();
+
+    if ( remove_root )
+        p2m->phys_table = pagetable_null();
+    else if ( !pagetable_is_null(p2m->phys_table) )
+    {
+        root_pg = pagetable_get_page(p2m->phys_table);
+        clear_domain_page(pagetable_get_mfn(p2m->phys_table));
+    }
 
     while ( (pg = page_list_remove_head(&p2m->pages)) )
-        d->arch.paging.free_page(d, pg);
+        if ( pg != root_pg )
+            d->arch.paging.free_page(d, pg);
+
+    if ( root_pg )
+        page_list_add(root_pg, &p2m->pages);
+
     p2m_unlock(p2m);
 }
 
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 8c1b041f71..8c5baba954 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2701,7 +2701,7 @@ int shadow_enable(struct domain *d, u32 mode)
     paging_unlock(d);
  out_unlocked:
     if ( rv != 0 && !pagetable_is_null(p2m_get_pagetable(p2m)) )
-        p2m_teardown(p2m);
+        p2m_teardown(p2m, true);
     if ( rv != 0 && pg != NULL )
     {
         pg->count_info &= ~PGC_count_mask;
@@ -2866,7 +2866,7 @@ void shadow_final_teardown(struct domain *d)
         shadow_teardown(d, NULL);
 
     /* It is now safe to pull down the p2m map. */
-    p2m_teardown(p2m_get_hostp2m(d));
+    p2m_teardown(p2m_get_hostp2m(d), true);
     /* Free any shadow memory that the p2m teardown released */
     paging_lock(d);
     shadow_set_allocation(d, 0, NULL);
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index f2af7a746c..c3c16748e7 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -574,7 +574,7 @@ int p2m_init(struct domain *d);
 int p2m_alloc_table(struct p2m_domain *p2m);
 
 /* Return all the p2m resources to Xen. */
-void p2m_teardown(struct p2m_domain *p2m);
+void p2m_teardown(struct p2m_domain *p2m, bool remove_root);
 void p2m_final_teardown(struct domain *d);
 
 /* Add a page to a domain's p2m table */
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.16


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:11:45 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:11:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420218.664824 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF2v-0007C2-Tm; Tue, 11 Oct 2022 13:11:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420218.664824; Tue, 11 Oct 2022 13:11:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF2v-0007Bs-R7; Tue, 11 Oct 2022 13:11:45 +0000
Received: by outflank-mailman (input) for mailman id 420218;
 Tue, 11 Oct 2022 13:11:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF2v-0007Bi-4x
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:11:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF2v-0002Dh-49
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:11:45 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF2v-0001BU-3K
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:11:45 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=RtocDs7yfL1bwuKPyhieMPUk7AJ1CWBx7vExd2MpQ4k=; b=BuHhub7i1PCUE9N7UxC2BS8Lcp
	1Ngdm40xN86dyOY4bBGYj48WJASVT8oes0ekd8UJpjISjB2XWROGdbPrO/0yuvAMBHtrBHc1HypDe
	j+8lPVs7Qkr4BydqUqllvuiAWh/t+PpVMkMDkbeQRsarU9o10faDkZluo+z9Ob3rKkc0=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.16] x86/HAP: adjust monitor table related error handling
Message-Id: <E1oiF2v-0001BU-3K@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:11:45 +0000

commit 3422c19d85a3d23a9d798eafb739ffb8865522d2
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 14:52:59 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:52:59 2022 +0200

    x86/HAP: adjust monitor table related error handling
    
    hap_make_monitor_table() will return INVALID_MFN if it encounters an
    error condition, but hap_update_paging_modes() wasn't handling this
    value, resulting in an inappropriate value being stored in
    monitor_table. This would subsequently mislead at least
    hap_vcpu_teardown(). Avoid this by bailing early.

    Further, when a domain has already been crashed or (perhaps less
    importantly, as there's no such path known to lead here) is already
    dying, avoid calling domain_crash() on it again - that's at best
    confusing.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    master commit: 5b44a61180f4f2e4f490a28400c884dd357ff45d
    master date: 2022-10-11 14:21:56 +0200
---
 xen/arch/x86/mm/hap/hap.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index a8f5a19da9..d75dc2b9ed 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -39,6 +39,7 @@
 #include <asm/domain.h>
 #include <xen/numa.h>
 #include <asm/hvm/nestedhvm.h>
+#include <public/sched.h>
 
 #include "private.h"
 
@@ -405,8 +406,13 @@ static mfn_t hap_make_monitor_table(struct vcpu *v)
     return m4mfn;
 
  oom:
-    printk(XENLOG_G_ERR "out of memory building monitor pagetable\n");
-    domain_crash(d);
+    if ( !d->is_dying &&
+         (!d->is_shutting_down || d->shutdown_code != SHUTDOWN_crash) )
+    {
+        printk(XENLOG_G_ERR "%pd: out of memory building monitor pagetable\n",
+               d);
+        domain_crash(d);
+    }
     return INVALID_MFN;
 }
 
@@ -766,6 +772,9 @@ static void hap_update_paging_modes(struct vcpu *v)
     if ( pagetable_is_null(v->arch.hvm.monitor_table) )
     {
         mfn_t mmfn = hap_make_monitor_table(v);
+
+        if ( mfn_eq(mmfn, INVALID_MFN) )
+            goto unlock;
         v->arch.hvm.monitor_table = pagetable_from_mfn(mmfn);
         make_cr3(v, mmfn);
         hvm_update_host_cr3(v);
@@ -774,6 +783,7 @@ static void hap_update_paging_modes(struct vcpu *v)
     /* CR3 is effectively updated by a mode change. Flush ASIDs, etc. */
     hap_update_cr3(v, 0, false);
 
+ unlock:
     paging_unlock(d);
     put_gfn(d, cr3_gfn);
 }
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.16


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:11:56 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:11:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420219.664829 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF36-0007EZ-0N; Tue, 11 Oct 2022 13:11:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420219.664829; Tue, 11 Oct 2022 13:11:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF35-0007ER-Sa; Tue, 11 Oct 2022 13:11:55 +0000
Received: by outflank-mailman (input) for mailman id 420219;
 Tue, 11 Oct 2022 13:11:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF35-0007EI-8C
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:11:55 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF35-0002Dr-7Q
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:11:55 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF35-0001C1-6b
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:11:55 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=1iQ2aaZegvoXHz1hLZnx5DWKJR6tjGiOzWS8KpWmpqo=; b=Q7X84XG5duZkR0gLeeiBURHJOK
	evhP3JZau1lHkj1/pF7bU+Q9r4gYq62DobADkJIDIPW9fEb9NbAqE1YbxubTmor94SxChEDbOyzh8
	iEDNjr0yQxp4+BJORPMYVbP0tGfkMFlTfBkV5wyluKtiwm6FVWqBXrvQO2FSt+jr6IOk=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.16] x86/shadow: tolerate failure of sh_set_toplevel_shadow()
Message-Id: <E1oiF35-0001C1-6b@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:11:55 +0000

commit 40e9daf6b56ae49bda3ba4e254ccf0e998e52a8c
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 14:53:12 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:53:12 2022 +0200

    x86/shadow: tolerate failure of sh_set_toplevel_shadow()
    
    Subsequently sh_set_toplevel_shadow() will be adjusted to install a
    blank entry in case prealloc fails. There are, in fact, pre-existing
    error paths which would put in place a blank entry. The 4- and 2-level
    code in sh_update_cr3(), however, assume the top level entry to be
    valid.
    
    Hence bail from the function in the unlikely event that it's not. Note
    that 3-level logic works differently: In particular a guest is free to
    supply a PDPTR pointing at 4 non-present (or otherwise deemed invalid)
    entries. The guest will crash, but we already cope with that.
    
    Really mfn_valid() is likely wrong to use in sh_set_toplevel_shadow(),
    and it should instead be !mfn_eq(gmfn, INVALID_MFN). Avoid such a change
    in a security context, but add a corresponding assertion.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: eac000978c1feb5a9ee3236ab0c0da9a477e5336
    master date: 2022-10-11 14:22:24 +0200
---
 xen/arch/x86/mm/shadow/common.c |  1 +
 xen/arch/x86/mm/shadow/multi.c  | 10 ++++++++++
 2 files changed, 11 insertions(+)

diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 8c5baba954..00e520cbd0 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2516,6 +2516,7 @@ void sh_set_toplevel_shadow(struct vcpu *v,
     /* Now figure out the new contents: is this a valid guest MFN? */
     if ( !mfn_valid(gmfn) )
     {
+        ASSERT(mfn_eq(gmfn, INVALID_MFN));
         new_entry = pagetable_null();
         goto install_new_entry;
     }
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 7b8f4dd13b..2ff78fe336 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -3312,6 +3312,11 @@ sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
     if ( sh_remove_write_access(d, gmfn, 4, 0) != 0 )
         guest_flush_tlb_mask(d, d->dirty_cpumask);
     sh_set_toplevel_shadow(v, 0, gmfn, SH_type_l4_shadow, sh_make_shadow);
+    if ( unlikely(pagetable_is_null(v->arch.paging.shadow.shadow_table[0])) )
+    {
+        ASSERT(d->is_dying || d->is_shutting_down);
+        return;
+    }
     if ( !shadow_mode_external(d) && !is_pv_32bit_domain(d) )
     {
         mfn_t smfn = pagetable_get_mfn(v->arch.paging.shadow.shadow_table[0]);
@@ -3370,6 +3375,11 @@ sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
     if ( sh_remove_write_access(d, gmfn, 2, 0) != 0 )
         guest_flush_tlb_mask(d, d->dirty_cpumask);
     sh_set_toplevel_shadow(v, 0, gmfn, SH_type_l2_shadow, sh_make_shadow);
+    if ( unlikely(pagetable_is_null(v->arch.paging.shadow.shadow_table[0])) )
+    {
+        ASSERT(d->is_dying || d->is_shutting_down);
+        return;
+    }
 #else
 #error This should never happen
 #endif
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.16


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:12:06 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:12:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420220.664832 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF3G-0007JK-0i; Tue, 11 Oct 2022 13:12:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420220.664832; Tue, 11 Oct 2022 13:12:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF3F-0007JA-UP; Tue, 11 Oct 2022 13:12:05 +0000
Received: by outflank-mailman (input) for mailman id 420220;
 Tue, 11 Oct 2022 13:12:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF3F-0007J4-BR
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:12:05 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF3F-0002EF-Aa
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:12:05 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF3F-0001Cr-9p
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:12:05 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=ENSQectc50TaoXjdWRnSljDQu24GrBn9k0DaHR6rcsI=; b=DMSaJP4MDFpCKuh9ialLW9pUmq
	hCDW8sVHjEq/36jwPSguYhjiK0ISoBOlLmxoWZ7rWh9F7BAd7KhgFqJlnDeVbwpjpckaNLJbncf2n
	mTnBPcnXkrrijaJDh20J2UBzSTe+LLVVOaAhZZoopAQ+eadL729y/4AwAZucdGw+AQ20=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.16] x86/shadow: tolerate failure in shadow_prealloc()
Message-Id: <E1oiF3F-0001Cr-9p@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:12:05 +0000

commit 28d3f677ec97c98154311f64871ac48762cf980a
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 14:53:27 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:53:27 2022 +0200

    x86/shadow: tolerate failure in shadow_prealloc()
    
    Prevent _shadow_prealloc() from calling BUG() when unable to fulfill
    the pre-allocation and instead return true/false.  Modify
    shadow_prealloc() to crash the domain on allocation failure (if the
    domain is not already dying), as shadow cannot operate normally after
    that.  Modify callers to also gracefully handle {_,}shadow_prealloc()
    failing to fulfill the request.
    
    Note this in turn requires adjusting the callers of
    sh_make_monitor_table() also to handle it returning INVALID_MFN.
    sh_update_paging_modes() is also modified to add additional error
    paths in case of allocation failure, some of which will return with
    null monitor page tables (and the domain likely crashed).  This is no
    different from the current error paths, but the newly introduced ones
    are more likely to trigger.
    
    The now added failure points in sh_update_paging_modes() also require
    that on some error return paths the previous structures are cleared,
    and thus the monitor table is null.
    
    While there adjust the 'type' parameter type of shadow_prealloc() to
    unsigned int rather than u32.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: b7f93c6afb12b6061e2d19de2f39ea09b569ac68
    master date: 2022-10-11 14:22:53 +0200
---
 xen/arch/x86/mm/shadow/common.c  | 69 ++++++++++++++++++++++++++++++----------
 xen/arch/x86/mm/shadow/hvm.c     |  4 ++-
 xen/arch/x86/mm/shadow/multi.c   | 11 +++++--
 xen/arch/x86/mm/shadow/private.h |  3 +-
 4 files changed, 66 insertions(+), 21 deletions(-)
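
[Archive editor's note: a hypothetical sketch of the error-handling change described above, with simplified stand-in types. The real `_shadow_prealloc()` also attempts reclaim before giving up; that is elided here.]

```c
/* Sketch: _shadow_prealloc() used to BUG() when it could not free enough
 * pages; it now returns false, and the shadow_prealloc() wrapper crashes
 * the domain once, centrally, instead of relying on every caller. */
#include <assert.h>
#include <stdbool.h>

struct domain {
    bool is_dying;
    bool crashed;            /* stand-in for domain_crash() having run */
    unsigned int free_pages;
};

static void domain_crash(struct domain *d) { d->crashed = true; }

/* Was "void ... { ... BUG(); }"; now __must_check, reporting failure.
 * (Reclaim of pinned/in-use shadows is elided in this sketch.) */
static bool __attribute__((warn_unused_result))
_shadow_prealloc(struct domain *d, unsigned int pages)
{
    return d->free_pages >= pages;
}

static bool shadow_prealloc(struct domain *d, unsigned int pages)
{
    bool ret = _shadow_prealloc(d, pages);

    /* Failure can only end in a domain crash: do it here, once, rather
     * than in every caller (dying domains are already on their way out). */
    if (!ret && !d->is_dying)
        domain_crash(d);
    return ret;
}
```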

diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 00e520cbd0..2067c7d16b 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -36,6 +36,7 @@
 #include <asm/flushtlb.h>
 #include <asm/shadow.h>
 #include <xen/numa.h>
+#include <public/sched.h>
 #include "private.h"
 
 DEFINE_PER_CPU(uint32_t,trace_shadow_path_flags);
@@ -928,14 +929,15 @@ static inline void trace_shadow_prealloc_unpin(struct domain *d, mfn_t smfn)
 
 /* Make sure there are at least count order-sized pages
  * available in the shadow page pool. */
-static void _shadow_prealloc(struct domain *d, unsigned int pages)
+static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
 {
     struct vcpu *v;
     struct page_info *sp, *t;
     mfn_t smfn;
     int i;
 
-    if ( d->arch.paging.shadow.free_pages >= pages ) return;
+    if ( d->arch.paging.shadow.free_pages >= pages )
+        return true;
 
     /* Shouldn't have enabled shadows if we've no vcpus. */
     ASSERT(d->vcpu && d->vcpu[0]);
@@ -951,7 +953,8 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
         sh_unpin(d, smfn);
 
         /* See if that freed up enough space */
-        if ( d->arch.paging.shadow.free_pages >= pages ) return;
+        if ( d->arch.paging.shadow.free_pages >= pages )
+            return true;
     }
 
     /* Stage two: all shadow pages are in use in hierarchies that are
@@ -974,7 +977,7 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
                 if ( d->arch.paging.shadow.free_pages >= pages )
                 {
                     guest_flush_tlb_mask(d, d->dirty_cpumask);
-                    return;
+                    return true;
                 }
             }
         }
@@ -987,7 +990,12 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
            d->arch.paging.shadow.total_pages,
            d->arch.paging.shadow.free_pages,
            d->arch.paging.shadow.p2m_pages);
-    BUG();
+
+    ASSERT(d->is_dying);
+
+    guest_flush_tlb_mask(d, d->dirty_cpumask);
+
+    return false;
 }
 
 /* Make sure there are at least count pages of the order according to
@@ -995,9 +1003,19 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
  * This must be called before any calls to shadow_alloc().  Since this
  * will free existing shadows to make room, it must be called early enough
  * to avoid freeing shadows that the caller is currently working on. */
-void shadow_prealloc(struct domain *d, u32 type, unsigned int count)
+bool shadow_prealloc(struct domain *d, unsigned int type, unsigned int count)
 {
-    return _shadow_prealloc(d, shadow_size(type) * count);
+    bool ret = _shadow_prealloc(d, shadow_size(type) * count);
+
+    if ( !ret && !d->is_dying &&
+         (!d->is_shutting_down || d->shutdown_code != SHUTDOWN_crash) )
+        /*
+         * Failing to allocate memory required for shadow usage can only result in
+         * a domain crash, do it here rather that relying on every caller to do it.
+         */
+        domain_crash(d);
+
+    return ret;
 }
 
 /* Deliberately free all the memory we can: this will tear down all of
@@ -1218,7 +1236,7 @@ void shadow_free(struct domain *d, mfn_t smfn)
 static struct page_info *
 shadow_alloc_p2m_page(struct domain *d)
 {
-    struct page_info *pg;
+    struct page_info *pg = NULL;
 
     /* This is called both from the p2m code (which never holds the
      * paging lock) and the log-dirty code (which always does). */
@@ -1236,16 +1254,18 @@ shadow_alloc_p2m_page(struct domain *d)
                     d->arch.paging.shadow.p2m_pages,
                     shadow_min_acceptable_pages(d));
         }
-        paging_unlock(d);
-        return NULL;
+        goto out;
     }
 
-    shadow_prealloc(d, SH_type_p2m_table, 1);
+    if ( !shadow_prealloc(d, SH_type_p2m_table, 1) )
+        goto out;
+
     pg = mfn_to_page(shadow_alloc(d, SH_type_p2m_table, 0));
     d->arch.paging.shadow.p2m_pages++;
     d->arch.paging.shadow.total_pages--;
     ASSERT(!page_get_owner(pg) && !(pg->count_info & PGC_count_mask));
 
+ out:
     paging_unlock(d);
 
     return pg;
@@ -1336,7 +1356,9 @@ int shadow_set_allocation(struct domain *d, unsigned int pages, bool *preempted)
         else if ( d->arch.paging.shadow.total_pages > pages )
         {
             /* Need to return memory to domheap */
-            _shadow_prealloc(d, 1);
+            if ( !_shadow_prealloc(d, 1) )
+                return -ENOMEM;
+
             sp = page_list_remove_head(&d->arch.paging.shadow.freelist);
             ASSERT(sp);
             /*
@@ -2334,12 +2356,13 @@ static void sh_update_paging_modes(struct vcpu *v)
     if ( mfn_eq(v->arch.paging.shadow.oos_snapshot[0], INVALID_MFN) )
     {
         int i;
+
+        if ( !shadow_prealloc(d, SH_type_oos_snapshot, SHADOW_OOS_PAGES) )
+            return;
+
         for(i = 0; i < SHADOW_OOS_PAGES; i++)
-        {
-            shadow_prealloc(d, SH_type_oos_snapshot, 1);
             v->arch.paging.shadow.oos_snapshot[i] =
                 shadow_alloc(d, SH_type_oos_snapshot, 0);
-        }
     }
 #endif /* OOS */
 
@@ -2403,6 +2426,9 @@ static void sh_update_paging_modes(struct vcpu *v)
             mfn_t mmfn = sh_make_monitor_table(
                              v, v->arch.paging.mode->shadow.shadow_levels);
 
+            if ( mfn_eq(mmfn, INVALID_MFN) )
+                return;
+
             v->arch.hvm.monitor_table = pagetable_from_mfn(mmfn);
             make_cr3(v, mmfn);
             hvm_update_host_cr3(v);
@@ -2441,6 +2467,12 @@ static void sh_update_paging_modes(struct vcpu *v)
                 v->arch.hvm.monitor_table = pagetable_null();
                 new_mfn = sh_make_monitor_table(
                               v, v->arch.paging.mode->shadow.shadow_levels);
+                if ( mfn_eq(new_mfn, INVALID_MFN) )
+                {
+                    sh_destroy_monitor_table(v, old_mfn,
+                                             old_mode->shadow.shadow_levels);
+                    return;
+                }
                 v->arch.hvm.monitor_table = pagetable_from_mfn(new_mfn);
                 SHADOW_PRINTK("new monitor table %"PRI_mfn "\n",
                                mfn_x(new_mfn));
@@ -2526,7 +2558,12 @@ void sh_set_toplevel_shadow(struct vcpu *v,
     if ( !mfn_valid(smfn) )
     {
         /* Make sure there's enough free shadow memory. */
-        shadow_prealloc(d, root_type, 1);
+        if ( !shadow_prealloc(d, root_type, 1) )
+        {
+            new_entry = pagetable_null();
+            goto install_new_entry;
+        }
+
         /* Shadow the page. */
         smfn = make_shadow(v, gmfn, root_type);
     }
diff --git a/xen/arch/x86/mm/shadow/hvm.c b/xen/arch/x86/mm/shadow/hvm.c
index d5f42102a0..a0878d9ad7 100644
--- a/xen/arch/x86/mm/shadow/hvm.c
+++ b/xen/arch/x86/mm/shadow/hvm.c
@@ -700,7 +700,9 @@ mfn_t sh_make_monitor_table(const struct vcpu *v, unsigned int shadow_levels)
     ASSERT(!pagetable_get_pfn(v->arch.hvm.monitor_table));
 
     /* Guarantee we can get the memory we need */
-    shadow_prealloc(d, SH_type_monitor_table, CONFIG_PAGING_LEVELS);
+    if ( !shadow_prealloc(d, SH_type_monitor_table, CONFIG_PAGING_LEVELS) )
+        return INVALID_MFN;
+
     m4mfn = shadow_alloc(d, SH_type_monitor_table, 0);
     mfn_to_page(m4mfn)->shadow_flags = 4;
 
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 2ff78fe336..c07af0bd99 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -2440,9 +2440,14 @@ static int sh_page_fault(struct vcpu *v,
      * Preallocate shadow pages *before* removing writable accesses
      * otherwhise an OOS L1 might be demoted and promoted again with
      * writable mappings. */
-    shadow_prealloc(d,
-                    SH_type_l1_shadow,
-                    GUEST_PAGING_LEVELS < 4 ? 1 : GUEST_PAGING_LEVELS - 1);
+    if ( !shadow_prealloc(d, SH_type_l1_shadow,
+                          GUEST_PAGING_LEVELS < 4
+                          ? 1 : GUEST_PAGING_LEVELS - 1) )
+    {
+        paging_unlock(d);
+        put_gfn(d, gfn_x(gfn));
+        return 0;
+    }
 
     rc = gw_remove_write_accesses(v, va, &gw);
 
diff --git a/xen/arch/x86/mm/shadow/private.h b/xen/arch/x86/mm/shadow/private.h
index 35efb1b984..738214f75e 100644
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -383,7 +383,8 @@ void shadow_promote(struct domain *d, mfn_t gmfn, u32 type);
 void shadow_demote(struct domain *d, mfn_t gmfn, u32 type);
 
 /* Shadow page allocation functions */
-void  shadow_prealloc(struct domain *d, u32 shadow_type, unsigned int count);
+bool __must_check shadow_prealloc(struct domain *d, unsigned int shadow_type,
+                                  unsigned int count);
 mfn_t shadow_alloc(struct domain *d,
                     u32 shadow_type,
                     unsigned long backpointer);
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.16


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:12:16 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:12:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420221.664836 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF3Q-0007Lq-2T; Tue, 11 Oct 2022 13:12:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420221.664836; Tue, 11 Oct 2022 13:12:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF3P-0007Li-Vy; Tue, 11 Oct 2022 13:12:15 +0000
Received: by outflank-mailman (input) for mailman id 420221;
 Tue, 11 Oct 2022 13:12:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF3P-0007La-Ea
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:12:15 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF3P-0002Ej-Dj
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:12:15 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF3P-0001DS-Cp
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:12:15 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=lxMtv332KAlLiQxFTBuk5uxh3lPEADcO+hA/LsWEYjo=; b=Y0ZJqGAzbNUe2sPLSWY2vI+hVv
	teoQijGgMJXnsmB6v1RiCG7LKvPdtTnKbXTR+eHYibKCGn+SoeUUTR4SG+EBJGwuN1fV0u6msGi7r
	DyjrtF24LXPuvw/F4Z1O6/jdWGRuosbes17yzU7UPmlQ6ZW2UL9w8T4clgZPmfcezuAs=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.16] x86/p2m: refuse new allocations for dying domains
Message-Id: <E1oiF3P-0001DS-Cp@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:12:15 +0000

commit 745e0b300dc3f5000e6d48c273b405d4bcc29ba7
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 14:53:41 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:53:41 2022 +0200

    x86/p2m: refuse new allocations for dying domains
    
    This will in particular prevent any attempts to add entries to the p2m,
    once - in a subsequent change - non-root entries have been removed.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: ff600a8cf8e36f8ecbffecf96a035952e022ab87
    master date: 2022-10-11 14:23:22 +0200
---
 xen/arch/x86/mm/hap/hap.c       |  5 ++++-
 xen/arch/x86/mm/shadow/common.c | 18 ++++++++++++++----
 2 files changed, 18 insertions(+), 5 deletions(-)
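
[Archive editor's note: a minimal, hypothetical sketch of the guard this patch adds. `pool_alloc()` below is a stand-in for `hap_alloc()`/`shadow_alloc_p2m_page()`, not their real signatures.]

```c
/* Sketch: the pool allocator checks d->is_dying up front and returns
 * NULL, so no new p2m entries can be built once teardown has started. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct page { struct page *next; };

struct domain {
    bool is_dying;
    struct page *freelist;
};

static struct page *pool_alloc(struct domain *d)
{
    if (d->is_dying)         /* dying: never hand out new pages */
        return NULL;

    struct page *pg = d->freelist;   /* normal path: pop the freelist */
    if (pg)
        d->freelist = pg->next;
    return pg;
}
```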

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index d75dc2b9ed..787991233e 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -245,6 +245,9 @@ static struct page_info *hap_alloc(struct domain *d)
 
     ASSERT(paging_locked_by_me(d));
 
+    if ( unlikely(d->is_dying) )
+        return NULL;
+
     pg = page_list_remove_head(&d->arch.paging.hap.freelist);
     if ( unlikely(!pg) )
         return NULL;
@@ -281,7 +284,7 @@ static struct page_info *hap_alloc_p2m_page(struct domain *d)
         d->arch.paging.hap.p2m_pages++;
         ASSERT(!page_get_owner(pg) && !(pg->count_info & PGC_count_mask));
     }
-    else if ( !d->arch.paging.p2m_alloc_failed )
+    else if ( !d->arch.paging.p2m_alloc_failed && !d->is_dying )
     {
         d->arch.paging.p2m_alloc_failed = 1;
         dprintk(XENLOG_ERR, "d%i failed to allocate from HAP pool\n",
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 2067c7d16b..9807f6ec6c 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -939,6 +939,10 @@ static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
     if ( d->arch.paging.shadow.free_pages >= pages )
         return true;
 
+    if ( unlikely(d->is_dying) )
+        /* No reclaim when the domain is dying, teardown will take care of it. */
+        return false;
+
     /* Shouldn't have enabled shadows if we've no vcpus. */
     ASSERT(d->vcpu && d->vcpu[0]);
 
@@ -991,7 +995,7 @@ static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
            d->arch.paging.shadow.free_pages,
            d->arch.paging.shadow.p2m_pages);
 
-    ASSERT(d->is_dying);
+    ASSERT_UNREACHABLE();
 
     guest_flush_tlb_mask(d, d->dirty_cpumask);
 
@@ -1005,10 +1009,13 @@ static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
  * to avoid freeing shadows that the caller is currently working on. */
 bool shadow_prealloc(struct domain *d, unsigned int type, unsigned int count)
 {
-    bool ret = _shadow_prealloc(d, shadow_size(type) * count);
+    bool ret;
 
-    if ( !ret && !d->is_dying &&
-         (!d->is_shutting_down || d->shutdown_code != SHUTDOWN_crash) )
+    if ( unlikely(d->is_dying) )
+       return false;
+
+    ret = _shadow_prealloc(d, shadow_size(type) * count);
+    if ( !ret && (!d->is_shutting_down || d->shutdown_code != SHUTDOWN_crash) )
         /*
          * Failing to allocate memory required for shadow usage can only result in
          * a domain crash, do it here rather that relying on every caller to do it.
@@ -1238,6 +1245,9 @@ shadow_alloc_p2m_page(struct domain *d)
 {
     struct page_info *pg = NULL;
 
+    if ( unlikely(d->is_dying) )
+       return NULL;
+
     /* This is called both from the p2m code (which never holds the
      * paging lock) and the log-dirty code (which always does). */
     paging_lock_recursive(d);
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.16


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:12:27 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:12:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420222.664840 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF3b-0007R4-42; Tue, 11 Oct 2022 13:12:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420222.664840; Tue, 11 Oct 2022 13:12:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF3b-0007Qu-1D; Tue, 11 Oct 2022 13:12:27 +0000
Received: by outflank-mailman (input) for mailman id 420222;
 Tue, 11 Oct 2022 13:12:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF3Z-0007Qa-Hf
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:12:25 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF3Z-0002Ew-Gq
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:12:25 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF3Z-0001Ds-G7
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:12:25 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=1yWGj8vN2HSRg0esXyiNPrvbmamAIlLuZizj2F14kiw=; b=GiOEDgRuR9MVBXTLB1+i0Ipzho
	UslvtOGWZLn+Rz1BLS2/rucDiEuybR2dRHDrzKibwy7BmXlmoqm0b2YlwPv2Ya39B6ScMGeUPnhuf
	qsQ7BsFrn+1TCg16t3BdJ3Pe5SOi0ZrKWSpHNyDYoxobcukcVMDMbfGYuuz+SISzAgH4=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.16] x86/p2m: truly free paging pool memory for dying domains
Message-Id: <E1oiF3Z-0001Ds-G7@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:12:25 +0000

commit 943635d8f8486209e4e48966507ad57963e96284
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 14:54:00 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:54:00 2022 +0200

    x86/p2m: truly free paging pool memory for dying domains
    
    Modify {hap,shadow}_free to free the page immediately if the domain is
    dying, so that pages don't accumulate in the pool when
    {shadow,hap}_final_teardown() get called. This is to limit the amount of
    work which needs to be done there (in a non-preemptable manner).
    
    Note the call to shadow_free() in shadow_free_p2m_page() is moved after
    increasing total_pages, so that the decrease done in shadow_free() in
    case the domain is dying doesn't underflow the counter, even if just for
    a short interval.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: f50a2c0e1d057c00d6061f40ae24d068226052ad
    master date: 2022-10-11 14:23:51 +0200
---
 xen/arch/x86/mm/hap/hap.c       | 12 ++++++++++++
 xen/arch/x86/mm/shadow/common.c | 28 +++++++++++++++++++++++++---
 2 files changed, 37 insertions(+), 3 deletions(-)
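
[Archive editor's note: a hypothetical sketch of the two ideas in this patch, using simplified stand-ins for the Xen structures and for `free_domheap_page()`.]

```c
/* Sketch: (1) when the domain is dying, freed pool pages go straight
 * back to the heap (shrinking total_pages) instead of onto the
 * freelist; (2) the p2m accounting is adjusted *before* the free call
 * so the dying-path decrement cannot transiently underflow total_pages. */
#include <assert.h>
#include <stdbool.h>

struct domain {
    bool is_dying;
    unsigned int total_pages, free_pages, p2m_pages;
    unsigned int heap_freed;   /* stand-in for free_domheap_page() calls */
};

static void pool_free(struct domain *d)
{
    if (d->is_dying) {
        d->heap_freed++;       /* release now; less work at teardown */
        d->total_pages--;
    } else {
        d->free_pages++;       /* back onto the pool freelist */
    }
}

static void free_p2m_page(struct domain *d)
{
    /* Increase total_pages first, mirroring the reordering in
     * shadow_free_p2m_page(), so pool_free() cannot underflow it. */
    d->p2m_pages--;
    d->total_pages++;
    pool_free(d);
}
```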

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 787991233e..aef2297450 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -265,6 +265,18 @@ static void hap_free(struct domain *d, mfn_t mfn)
 
     ASSERT(paging_locked_by_me(d));
 
+    /*
+     * For dying domains, actually free the memory here. This way less work is
+     * left to hap_final_teardown(), which cannot easily have preemption checks
+     * added.
+     */
+    if ( unlikely(d->is_dying) )
+    {
+        free_domheap_page(pg);
+        d->arch.paging.hap.total_pages--;
+        return;
+    }
+
     d->arch.paging.hap.free_pages++;
     page_list_add_tail(pg, &d->arch.paging.hap.freelist);
 }
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 9807f6ec6c..9eb33eafc7 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -1187,6 +1187,7 @@ mfn_t shadow_alloc(struct domain *d,
 void shadow_free(struct domain *d, mfn_t smfn)
 {
     struct page_info *next = NULL, *sp = mfn_to_page(smfn);
+    bool dying = ACCESS_ONCE(d->is_dying);
     struct page_list_head *pin_list;
     unsigned int pages;
     u32 shadow_type;
@@ -1229,11 +1230,32 @@ void shadow_free(struct domain *d, mfn_t smfn)
          * just before the allocator hands the page out again. */
         page_set_tlbflush_timestamp(sp);
         perfc_decr(shadow_alloc_count);
-        page_list_add_tail(sp, &d->arch.paging.shadow.freelist);
+
+        /*
+         * For dying domains, actually free the memory here. This way less
+         * work is left to shadow_final_teardown(), which cannot easily have
+         * preemption checks added.
+         */
+        if ( unlikely(dying) )
+        {
+            /*
+             * The backpointer field (sh.back) used by shadow code aliases the
+             * domain owner field, unconditionally clear it here to avoid
+             * free_domheap_page() attempting to parse it.
+             */
+            page_set_owner(sp, NULL);
+            free_domheap_page(sp);
+        }
+        else
+            page_list_add_tail(sp, &d->arch.paging.shadow.freelist);
+
         sp = next;
     }
 
-    d->arch.paging.shadow.free_pages += pages;
+    if ( unlikely(dying) )
+        d->arch.paging.shadow.total_pages -= pages;
+    else
+        d->arch.paging.shadow.free_pages += pages;
 }
 
 /* Divert a page from the pool to be used by the p2m mapping.
@@ -1303,9 +1325,9 @@ shadow_free_p2m_page(struct domain *d, struct page_info *pg)
      * paging lock) and the log-dirty code (which always does). */
     paging_lock_recursive(d);
 
-    shadow_free(d, page_to_mfn(pg));
     d->arch.paging.shadow.p2m_pages--;
     d->arch.paging.shadow.total_pages++;
+    shadow_free(d, page_to_mfn(pg));
 
     paging_unlock(d);
 }
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.16


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:12:37 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:12:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420223.664844 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF3l-0007UG-6x; Tue, 11 Oct 2022 13:12:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420223.664844; Tue, 11 Oct 2022 13:12:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF3l-0007U9-45; Tue, 11 Oct 2022 13:12:37 +0000
Received: by outflank-mailman (input) for mailman id 420223;
 Tue, 11 Oct 2022 13:12:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF3j-0007Tu-L1
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:12:35 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF3j-0002FB-KE
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:12:35 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF3j-0001EP-JB
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:12:35 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=v9vPJtYGSW6pCw3gh7mmV5mpf9dDy3O7ojWyUOeSMPg=; b=rl7faiJhhnUZSYaQXftjEy3vmH
	ZqqfKaq8GGEQ/FR+ruZILmp9dg4MRiz1ZO3yOeu1uIYGpxlaURCn2vKzELl3dX8iyhrf+hfscjsKu
	VU5S9be72XX6r6WH4cJER98PlLtRNMSRa28SiVgds0vtElC9jKqY4Ay42Re6cwMo4LNc=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.16] x86/p2m: free the paging memory pool preemptively
Message-Id: <E1oiF3j-0001EP-JB@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:12:35 +0000

commit f5959ed715e19cf2844656477dbf74c2f576c9d4
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 14:54:21 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:54:21 2022 +0200

    x86/p2m: free the paging memory pool preemptively
    
    The paging memory pool is currently freed in two different places:
    from {shadow,hap}_teardown() via domain_relinquish_resources() and
    from {shadow,hap}_final_teardown() via complete_domain_destroy().
    While the former does handle preemption, the latter doesn't.
    
    Attempt to move as much p2m related freeing as possible to happen
    before the call to {shadow,hap}_teardown(), so that most memory can be
    freed in a preemptive way.  In order to avoid causing issues to
    existing callers leave the root p2m page tables set and free them in
    {hap,shadow}_final_teardown().  Also modify {hap,shadow}_free to free
    the page immediately if the domain is dying, so that pages don't
    accumulate in the pool when {shadow,hap}_final_teardown() get called.
    
    Move altp2m_vcpu_disable_ve() to be done in hap_teardown(), as that's
    the place where altp2m_active gets disabled now.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Reported-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: e7aa55c0aab36d994bf627c92bd5386ae167e16e
    master date: 2022-10-11 14:24:21 +0200
---
 xen/arch/x86/domain.c           |  7 -------
 xen/arch/x86/mm/hap/hap.c       | 42 +++++++++++++++++++++++++----------------
 xen/arch/x86/mm/shadow/common.c | 12 ++++++++++++
 3 files changed, 38 insertions(+), 23 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 0d39981550..a4356893bd 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -38,7 +38,6 @@
 #include <xen/livepatch.h>
 #include <public/sysctl.h>
 #include <public/hvm/hvm_vcpu.h>
-#include <asm/altp2m.h>
 #include <asm/regs.h>
 #include <asm/mc146818rtc.h>
 #include <asm/system.h>
@@ -2381,12 +2380,6 @@ int domain_relinquish_resources(struct domain *d)
             vpmu_destroy(v);
         }
 
-        if ( altp2m_active(d) )
-        {
-            for_each_vcpu ( d, v )
-                altp2m_vcpu_disable_ve(v);
-        }
-
         if ( is_pv_domain(d) )
         {
             for_each_vcpu ( d, v )
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index aef2297450..a44fcfd95e 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -28,6 +28,7 @@
 #include <xen/domain_page.h>
 #include <xen/guest_access.h>
 #include <xen/keyhandler.h>
+#include <asm/altp2m.h>
 #include <asm/event.h>
 #include <asm/page.h>
 #include <asm/current.h>
@@ -546,24 +547,8 @@ void hap_final_teardown(struct domain *d)
     unsigned int i;
 
     if ( hvm_altp2m_supported() )
-    {
-        d->arch.altp2m_active = 0;
-
-        if ( d->arch.altp2m_eptp )
-        {
-            free_xenheap_page(d->arch.altp2m_eptp);
-            d->arch.altp2m_eptp = NULL;
-        }
-
-        if ( d->arch.altp2m_visible_eptp )
-        {
-            free_xenheap_page(d->arch.altp2m_visible_eptp);
-            d->arch.altp2m_visible_eptp = NULL;
-        }
-
         for ( i = 0; i < MAX_ALTP2M; i++ )
             p2m_teardown(d->arch.altp2m_p2m[i], true);
-    }
 
     /* Destroy nestedp2m's first */
     for (i = 0; i < MAX_NESTEDP2M; i++) {
@@ -578,6 +563,8 @@ void hap_final_teardown(struct domain *d)
     paging_lock(d);
     hap_set_allocation(d, 0, NULL);
     ASSERT(d->arch.paging.hap.p2m_pages == 0);
+    ASSERT(d->arch.paging.hap.free_pages == 0);
+    ASSERT(d->arch.paging.hap.total_pages == 0);
     paging_unlock(d);
 }
 
@@ -603,6 +590,7 @@ void hap_vcpu_teardown(struct vcpu *v)
 void hap_teardown(struct domain *d, bool *preempted)
 {
     struct vcpu *v;
+    unsigned int i;
 
     ASSERT(d->is_dying);
     ASSERT(d != current->domain);
@@ -611,6 +599,28 @@ void hap_teardown(struct domain *d, bool *preempted)
     for_each_vcpu ( d, v )
         hap_vcpu_teardown(v);
 
+    /* Leave the root pt in case we get further attempts to modify the p2m. */
+    if ( hvm_altp2m_supported() )
+    {
+        if ( altp2m_active(d) )
+            for_each_vcpu ( d, v )
+                altp2m_vcpu_disable_ve(v);
+
+        d->arch.altp2m_active = 0;
+
+        FREE_XENHEAP_PAGE(d->arch.altp2m_eptp);
+        FREE_XENHEAP_PAGE(d->arch.altp2m_visible_eptp);
+
+        for ( i = 0; i < MAX_ALTP2M; i++ )
+            p2m_teardown(d->arch.altp2m_p2m[i], false);
+    }
+
+    /* Destroy nestedp2m's after altp2m. */
+    for ( i = 0; i < MAX_NESTEDP2M; i++ )
+        p2m_teardown(d->arch.nested_p2m[i], false);
+
+    p2m_teardown(p2m_get_hostp2m(d), false);
+
     paging_lock(d); /* Keep various asserts happy */
 
     if ( d->arch.paging.hap.total_pages != 0 )
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 9eb33eafc7..ac9a1ae078 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2824,8 +2824,17 @@ void shadow_teardown(struct domain *d, bool *preempted)
     for_each_vcpu ( d, v )
         shadow_vcpu_teardown(v);
 
+    p2m_teardown(p2m_get_hostp2m(d), false);
+
     paging_lock(d);
 
+    /*
+     * Reclaim all shadow memory so that shadow_set_allocation() doesn't find
+     * in-use pages, as _shadow_prealloc() will no longer try to reclaim pages
+     * because the domain is dying.
+     */
+    shadow_blow_tables(d);
+
 #if (SHADOW_OPTIMIZATIONS & (SHOPT_VIRTUAL_TLB|SHOPT_OUT_OF_SYNC))
     /* Free the virtual-TLB array attached to each vcpu */
     for_each_vcpu(d, v)
@@ -2946,6 +2955,9 @@ void shadow_final_teardown(struct domain *d)
                    d->arch.paging.shadow.total_pages,
                    d->arch.paging.shadow.free_pages,
                    d->arch.paging.shadow.p2m_pages);
+    ASSERT(!d->arch.paging.shadow.total_pages);
+    ASSERT(!d->arch.paging.shadow.free_pages);
+    ASSERT(!d->arch.paging.shadow.p2m_pages);
     paging_unlock(d);
 }
 
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.16
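[Editor's note: not part of the patch. A minimal C sketch of the idea behind the {hap,shadow}_free() change above: while the domain is alive, freed paging pages are parked in the per-domain pool for reuse, but once the domain is dying they go straight back to the heap, so the pool cannot accumulate pages before final teardown. All names here are invented for illustration; this is not the Xen code.]

```c
#include <stdbool.h>

struct pool {
    unsigned int free_pages;   /* pages parked in the per-domain pool */
    unsigned int heap_pages;   /* pages already returned to the allocator */
};

/* Mirrors the behavioural change: cache freed pages in the pool only
 * while the domain is alive; free them immediately once it is dying. */
void pool_free_page(struct pool *p, bool domain_dying)
{
    if (domain_dying)
        p->heap_pages++;   /* stands in for freeing to the domheap */
    else
        p->free_pages++;   /* stands in for page_list_add(..., freelist) */
}
```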


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:12:47 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:12:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420224.664848 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF3v-0007Wz-8S; Tue, 11 Oct 2022 13:12:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420224.664848; Tue, 11 Oct 2022 13:12:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF3v-0007Wr-5Z; Tue, 11 Oct 2022 13:12:47 +0000
Received: by outflank-mailman (input) for mailman id 420224;
 Tue, 11 Oct 2022 13:12:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF3t-0007Wc-Nw
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:12:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF3t-0002FL-NH
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:12:45 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF3t-0001F7-Ma
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:12:45 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=+qwJ517EvcCFlHwWBhZfp+FqbDsaRWtOqA9wl49VVTw=; b=aqtRwzjzgZ+vDrD8bbYWiVk/4r
	J9CfBM5ELw70x9ur/OHgUlP7AV+Mp24+i7lPoFrYQsXdvZiqdU1zZTtyMbZNwZBwH3tukBJLznCAw
	IQ/VgYQ73BJwfuK31BxGu78iT66+GqIuWDXCxvNbwaE6SbVkOMBxIenjWIj1D+D+qfbM=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.16] xen/x86: p2m: Add preemption in p2m_teardown()
Message-Id: <E1oiF3t-0001F7-Ma@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:12:45 +0000

commit a603386b422f5cb4c5e2639a7e20a1d99dba2175
Author:     Julien Grall <jgrall@amazon.com>
AuthorDate: Tue Oct 11 14:54:44 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:54:44 2022 +0200

    xen/x86: p2m: Add preemption in p2m_teardown()
    
    The list p2m->pages contains all the pages used by the P2M. On large
    instances this can be quite long, and the time spent calling
    d->arch.paging.free_page() can exceed 1ms for an 80GB guest on Xen
    running in a nested environment on a c5.metal.
    
    By extrapolation, it would take > 100ms for an 8TB guest (the
    largest size we currently security support). So add some preemption
    in p2m_teardown() and propagate it to the callers. Note there are 3
    places where the preemption is not enabled:
        - hap_final_teardown()/shadow_final_teardown(): We are
          preventing updates to the P2M once the domain is dying (so
          no more pages can be allocated) and most of the P2M pages
          will be freed in a preemptible manner when relinquishing the
          resources, so it is fine to disable preemption here.
        - shadow_enable(): This is fine because it will only undo the
          allocation that may have been made by p2m_alloc_table() (so
          only the root page table).
    
    The preemption is arbitrarily checked every 1024 iterations.
    
    We now need to include <xen/event.h> in p2m-basic in order to
    import the definition for local_events_need_delivery() used by
    general_preempt_check(). Ideally, the inclusion should happen in
    xen/sched.h but it opened a can of worms.
    
    Note that with the current approach, Xen doesn't keep track of
    whether the alt/nested P2Ms have been cleared, so there is some
    redundant work. However, this is not expected to incur much
    overhead (the P2M lock shouldn't be contended during teardown), so
    this optimization is left outside of the security fix.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    master commit: 8a2111250b424edc49c65c4d41b276766d30635c
    master date: 2022-10-11 14:24:48 +0200
---
 xen/arch/x86/mm/hap/hap.c       | 22 ++++++++++++++++------
 xen/arch/x86/mm/p2m.c           | 18 +++++++++++++++---
 xen/arch/x86/mm/shadow/common.c | 12 +++++++++---
 xen/include/asm-x86/p2m.h       |  2 +-
 4 files changed, 41 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index a44fcfd95e..1f9a157a0c 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -548,17 +548,17 @@ void hap_final_teardown(struct domain *d)
 
     if ( hvm_altp2m_supported() )
         for ( i = 0; i < MAX_ALTP2M; i++ )
-            p2m_teardown(d->arch.altp2m_p2m[i], true);
+            p2m_teardown(d->arch.altp2m_p2m[i], true, NULL);
 
     /* Destroy nestedp2m's first */
     for (i = 0; i < MAX_NESTEDP2M; i++) {
-        p2m_teardown(d->arch.nested_p2m[i], true);
+        p2m_teardown(d->arch.nested_p2m[i], true, NULL);
     }
 
     if ( d->arch.paging.hap.total_pages != 0 )
         hap_teardown(d, NULL);
 
-    p2m_teardown(p2m_get_hostp2m(d), true);
+    p2m_teardown(p2m_get_hostp2m(d), true, NULL);
     /* Free any memory that the p2m teardown released */
     paging_lock(d);
     hap_set_allocation(d, 0, NULL);
@@ -612,14 +612,24 @@ void hap_teardown(struct domain *d, bool *preempted)
         FREE_XENHEAP_PAGE(d->arch.altp2m_visible_eptp);
 
         for ( i = 0; i < MAX_ALTP2M; i++ )
-            p2m_teardown(d->arch.altp2m_p2m[i], false);
+        {
+            p2m_teardown(d->arch.altp2m_p2m[i], false, preempted);
+            if ( preempted && *preempted )
+                return;
+        }
     }
 
     /* Destroy nestedp2m's after altp2m. */
     for ( i = 0; i < MAX_NESTEDP2M; i++ )
-        p2m_teardown(d->arch.nested_p2m[i], false);
+    {
+        p2m_teardown(d->arch.nested_p2m[i], false, preempted);
+        if ( preempted && *preempted )
+            return;
+    }
 
-    p2m_teardown(p2m_get_hostp2m(d), false);
+    p2m_teardown(p2m_get_hostp2m(d), false, preempted);
+    if ( preempted && *preempted )
+        return;
 
     paging_lock(d); /* Keep various asserts happy */
 
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index aba4f17cbe..8781df9dda 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -749,12 +749,13 @@ int p2m_alloc_table(struct p2m_domain *p2m)
  * hvm fixme: when adding support for pvh non-hardware domains, this path must
  * cleanup any foreign p2m types (release refcnts on them).
  */
-void p2m_teardown(struct p2m_domain *p2m, bool remove_root)
+void p2m_teardown(struct p2m_domain *p2m, bool remove_root, bool *preempted)
 /* Return all the p2m pages to Xen.
  * We know we don't have any extra mappings to these pages */
 {
     struct page_info *pg, *root_pg = NULL;
     struct domain *d;
+    unsigned int i = 0;
 
     if (p2m == NULL)
         return;
@@ -773,8 +774,19 @@ void p2m_teardown(struct p2m_domain *p2m, bool remove_root)
     }
 
     while ( (pg = page_list_remove_head(&p2m->pages)) )
-        if ( pg != root_pg )
-            d->arch.paging.free_page(d, pg);
+    {
+        if ( pg == root_pg )
+            continue;
+
+        d->arch.paging.free_page(d, pg);
+
+        /* Arbitrarily check preemption every 1024 iterations */
+        if ( preempted && !(++i % 1024) && general_preempt_check() )
+        {
+            *preempted = true;
+            break;
+        }
+    }
 
     if ( root_pg )
         page_list_add(root_pg, &p2m->pages);
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index ac9a1ae078..3b0d781991 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2770,8 +2770,12 @@ int shadow_enable(struct domain *d, u32 mode)
  out_locked:
     paging_unlock(d);
  out_unlocked:
+    /*
+     * This is fine to ignore the preemption here because only the root
+     * will be allocated by p2m_alloc_table().
+     */
     if ( rv != 0 && !pagetable_is_null(p2m_get_pagetable(p2m)) )
-        p2m_teardown(p2m, true);
+        p2m_teardown(p2m, true, NULL);
     if ( rv != 0 && pg != NULL )
     {
         pg->count_info &= ~PGC_count_mask;
@@ -2824,7 +2828,9 @@ void shadow_teardown(struct domain *d, bool *preempted)
     for_each_vcpu ( d, v )
         shadow_vcpu_teardown(v);
 
-    p2m_teardown(p2m_get_hostp2m(d), false);
+    p2m_teardown(p2m_get_hostp2m(d), false, preempted);
+    if ( preempted && *preempted )
+        return;
 
     paging_lock(d);
 
@@ -2945,7 +2951,7 @@ void shadow_final_teardown(struct domain *d)
         shadow_teardown(d, NULL);
 
     /* It is now safe to pull down the p2m map. */
-    p2m_teardown(p2m_get_hostp2m(d), true);
+    p2m_teardown(p2m_get_hostp2m(d), true, NULL);
     /* Free any shadow memory that the p2m teardown released */
     paging_lock(d);
     shadow_set_allocation(d, 0, NULL);
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index c3c16748e7..2db9ab0122 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -574,7 +574,7 @@ int p2m_init(struct domain *d);
 int p2m_alloc_table(struct p2m_domain *p2m);
 
 /* Return all the p2m resources to Xen. */
-void p2m_teardown(struct p2m_domain *p2m, bool remove_root);
+void p2m_teardown(struct p2m_domain *p2m, bool remove_root, bool *preempted);
 void p2m_final_teardown(struct domain *d);
 
 /* Add a page to a domain's p2m table */
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.16
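[Editor's note: not part of the patch. A self-contained C sketch of the batched-preemption pattern the patch adds to p2m_teardown(): poll for preemption only every 1024 freed pages, so the check's cost stays negligible while bounding latency. general_preempt_check() is stubbed; the names are invented for illustration.]

```c
#include <stdbool.h>

static unsigned int preempt_polls;   /* counts how often the stub check ran */

/* Stub for Xen's general_preempt_check(); never requests preemption here. */
static bool general_preempt_check_stub(void)
{
    preempt_polls++;
    return false;
}

/* Returns the number of pages "freed" before stopping.  With a NULL
 * 'preempted' pointer the loop never polls, matching the patch. */
static unsigned int teardown_pages(unsigned int nr_pages, bool *preempted)
{
    unsigned int i = 0, freed = 0;

    while (freed < nr_pages)
    {
        freed++;   /* stands in for d->arch.paging.free_page(d, pg) */

        /* Arbitrarily check preemption every 1024 iterations. */
        if ( preempted && !(++i % 1024) && general_preempt_check_stub() )
        {
            *preempted = true;
            break;
        }
    }
    return freed;
}
```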


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:12:57 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:12:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420225.664853 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF45-0007Zp-AG; Tue, 11 Oct 2022 13:12:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420225.664853; Tue, 11 Oct 2022 13:12:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF45-0007Zf-6z; Tue, 11 Oct 2022 13:12:57 +0000
Received: by outflank-mailman (input) for mailman id 420225;
 Tue, 11 Oct 2022 13:12:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF43-0007ZR-R4
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:12:55 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF43-0002FV-QH
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:12:55 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF43-0001Fk-Pd
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:12:55 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=paDAqM9Pz6pIr+ThVMccqyGKjugALpJc1DJVDDBXkS8=; b=2x4mKbP0erbxpgyN1ypM/DnU1B
	P5DYdhX+1Fv/5Wwan5XCqJH/MQ8CAot209Wo0vNhDiuAnTYXPwD16wdya04qdRXXlB6/BscJFzhWX
	qW7/M9WHD1+guXtCYkSoLohHmbdWhUBMMgt0J0hxp0J6PAfDe2rmS/eMqLUwvqRktFlc=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.16] libxl, docs: Use arch-specific default paging memory
Message-Id: <E1oiF43-0001Fk-Pd@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:12:55 +0000

commit 755a9b52844de3e1e47aa1fc9991a4240ccfbf35
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Tue Oct 11 14:55:08 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:55:08 2022 +0200

    libxl, docs: Use arch-specific default paging memory
    
    The default paging memory (described in the `shadow_memory` entry
    of the xl config) is used by libxl to size the paging memory pool
    for xl guests. Currently this size is only used on x86, and
    includes a share of RAM for shadowing the resident processes. Since
    there are no shadow mode guests on Arm, that extra share is
    unnecessary. Therefore, this commit splits
    `libxl_get_required_shadow_memory()` into arch-specific helpers
    renamed to `libxl__arch_get_required_paging_memory()`.
    
    On x86, this helper keeps the original calculation from
    `libxl_get_required_shadow_memory()`, so no functional change is
    intended.
    
    On Arm, this helper returns 1MB per vCPU, plus 4KB per MiB of RAM
    for the P2M map, plus an additional 512KB.
    
    Also update the xl.cfg documentation to describe the Arm behaviour
    matching the code changes, and correct the comment style to follow
    the Xen coding style.
    
    This is part of CVE-2022-33747 / XSA-409.
    
    Suggested-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    master commit: 156a239ea288972425f967ac807b3cb5b5e14874
    master date: 2022-10-11 14:28:37 +0200
---
 docs/man/xl.cfg.5.pod.in       |  5 +++++
 tools/libs/light/libxl_arch.h  |  4 ++++
 tools/libs/light/libxl_arm.c   | 14 ++++++++++++++
 tools/libs/light/libxl_utils.c |  9 ++-------
 tools/libs/light/libxl_x86.c   | 13 +++++++++++++
 5 files changed, 38 insertions(+), 7 deletions(-)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index b98d161398..eda1e77ebd 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -1768,6 +1768,11 @@ are not using hardware assisted paging (i.e. you are using shadow
 mode) and your guest workload consists of a very large number of
 similar processes then increasing this value may improve performance.
 
+On Arm, this field is used to determine the size of the guest P2M pages
+pool, and the default value is 1MB per vCPU plus 4KB per MB of RAM for
+the P2M map and additional 512KB for extended regions. Users should
+adjust this value if bigger P2M pool size is needed.
+
 =back
 
 =head3 Processor and Platform Features
diff --git a/tools/libs/light/libxl_arch.h b/tools/libs/light/libxl_arch.h
index 1522ecb97f..5a060c2c30 100644
--- a/tools/libs/light/libxl_arch.h
+++ b/tools/libs/light/libxl_arch.h
@@ -90,6 +90,10 @@ void libxl__arch_update_domain_config(libxl__gc *gc,
                                       libxl_domain_config *dst,
                                       const libxl_domain_config *src);
 
+_hidden
+unsigned long libxl__arch_get_required_paging_memory(unsigned long maxmem_kb,
+                                                     unsigned int smp_cpus);
+
 #if defined(__i386__) || defined(__x86_64__)
 
 #define LAPIC_BASE_ADDRESS  0xfee00000
diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index eef1de0939..73a95e83af 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -154,6 +154,20 @@ out:
     return rc;
 }
 
+unsigned long libxl__arch_get_required_paging_memory(unsigned long maxmem_kb,
+                                                     unsigned int smp_cpus)
+{
+    /*
+     * 256 pages (1MB) per vcpu,
+     * plus 1 page per MiB of RAM for the P2M map,
+     * plus 1 page per MiB of extended region. This default value is 128 MiB
+     * which should be enough for domains that are not running backend.
+     * This is higher than the minimum that Xen would allocate if no value
+     * were given (but the Xen minimum is for safety, not performance).
+     */
+    return 4 * (256 * smp_cpus + maxmem_kb / 1024 + 128);
+}
+
 static struct arch_info {
     const char *guest_type;
     const char *timer_compat;
diff --git a/tools/libs/light/libxl_utils.c b/tools/libs/light/libxl_utils.c
index 4699c4a0a3..e276c0ee9c 100644
--- a/tools/libs/light/libxl_utils.c
+++ b/tools/libs/light/libxl_utils.c
@@ -18,6 +18,7 @@
 #include <ctype.h>
 
 #include "libxl_internal.h"
+#include "libxl_arch.h"
 #include "_paths.h"
 
 #ifndef LIBXL_HAVE_NONCONST_LIBXL_BASENAME_RETURN_VALUE
@@ -39,13 +40,7 @@ char *libxl_basename(const char *name)
 
 unsigned long libxl_get_required_shadow_memory(unsigned long maxmem_kb, unsigned int smp_cpus)
 {
-    /* 256 pages (1MB) per vcpu,
-       plus 1 page per MiB of RAM for the P2M map,
-       plus 1 page per MiB of RAM to shadow the resident processes.
-       This is higher than the minimum that Xen would allocate if no value
-       were given (but the Xen minimum is for safety, not performance).
-     */
-    return 4 * (256 * smp_cpus + 2 * (maxmem_kb / 1024));
+    return libxl__arch_get_required_paging_memory(maxmem_kb, smp_cpus);
 }
 
 char *libxl_domid_to_name(libxl_ctx *ctx, uint32_t domid)
diff --git a/tools/libs/light/libxl_x86.c b/tools/libs/light/libxl_x86.c
index 1feadebb18..51362893cf 100644
--- a/tools/libs/light/libxl_x86.c
+++ b/tools/libs/light/libxl_x86.c
@@ -882,6 +882,19 @@ void libxl__arch_update_domain_config(libxl__gc *gc,
                     libxl_defbool_val(src->b_info.arch_x86.msr_relaxed));
 }
 
+unsigned long libxl__arch_get_required_paging_memory(unsigned long maxmem_kb,
+                                                     unsigned int smp_cpus)
+{
+    /*
+     * 256 pages (1MB) per vcpu,
+     * plus 1 page per MiB of RAM for the P2M map,
+     * plus 1 page per MiB of RAM to shadow the resident processes.
+     * This is higher than the minimum that Xen would allocate if no value
+     * were given (but the Xen minimum is for safety, not performance).
+     */
+    return 4 * (256 * smp_cpus + 2 * (maxmem_kb / 1024));
+}
+
 /*
  * Local variables:
  * mode: C
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.16
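[Editor's note: not part of the patch. A worked check of the two sizing formulas above, transcribed from the patch's libxl_x86.c and libxl_arm.c helpers. Results are in KiB (4 KiB pages): x86 is 256 pages per vCPU, plus 1 page per MiB of RAM for the P2M map, plus 1 page per MiB to shadow resident processes; Arm drops the shadow term and adds 128 pages (512 KiB) for extended regions. Function names are invented for this sketch.]

```c
/* x86: 4 * (256 pages/vcpu + 2 pages per MiB of RAM), result in KiB. */
unsigned long paging_kb_x86(unsigned long maxmem_kb, unsigned int smp_cpus)
{
    return 4 * (256 * smp_cpus + 2 * (maxmem_kb / 1024));
}

/* Arm: 4 * (256 pages/vcpu + 1 page per MiB of RAM + 128 pages), in KiB. */
unsigned long paging_kb_arm(unsigned long maxmem_kb, unsigned int smp_cpus)
{
    return 4 * (256 * smp_cpus + maxmem_kb / 1024 + 128);
}
```

For a 1 vCPU, 1 GiB guest this gives 9216 KiB (9 MiB) on x86 but only 5632 KiB (5.5 MiB) on Arm, showing the saving from dropping the shadow term.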


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:13:07 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:13:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420226.664856 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF4F-0007cg-C1; Tue, 11 Oct 2022 13:13:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420226.664856; Tue, 11 Oct 2022 13:13:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF4F-0007cY-8a; Tue, 11 Oct 2022 13:13:07 +0000
Received: by outflank-mailman (input) for mailman id 420226;
 Tue, 11 Oct 2022 13:13:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF4D-0007cC-Tv
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:13:05 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF4D-0002Fs-TD
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:13:05 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF4D-0001GY-SZ
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:13:05 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=6uOvIu1vn0EnTmOgzMJ6B5B1XQ1i+kQiGaQrjsjZih8=; b=z4TIGlrFdKJ8pNyf+NZas2nIB7
	SbPUZyrm7s7Al32S4HQqknZ+D6mL2QCZO9T1h+tWwuwtfGDjlQ6kFi0+jhiHVfKO5MxKmmyQNpE3Z
	MRleap/RQ6hvxklTssRtU1MD2KFxjleUaorRrYknsXGcmE/Am6vrNM3Y8e0haZgLwrcE=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.16] xen/arm: Construct the P2M pages pool for guests
Message-Id: <E1oiF4D-0001GY-SZ@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:13:05 +0000

commit 914fc8e8b4cc003e90d51bee0aef54687358530a
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Tue Oct 11 14:55:21 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:55:21 2022 +0200

    xen/arm: Construct the P2M pages pool for guests
    
    This commit constructs the p2m pages pool for guests from the
    data structure and helper perspective.
    
    This is implemented by:
    
    - Adding a `struct paging_domain`, which contains a freelist, a
    counter and a spinlock, to `struct arch_domain`, tracking the
    free p2m pages and the total number of p2m pages in the p2m
    pages pool.
    
    - Adding a helper `p2m_get_allocation` to get the p2m pool size.
    
    - Adding a helper `p2m_set_allocation` to set the p2m pages pool
    size. This helper should be called before allocating memory for
    a guest.
    
    - Adding a helper `p2m_teardown_allocation` to free the p2m pages
    pool. This helper should be called during xl domain destroy.
    
    This is part of CVE-2022-33747 / XSA-409.
    
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    master commit: 55914f7fc91a468649b8a3ec3f53ae1c4aca6670
    master date: 2022-10-11 14:28:39 +0200
---
 xen/arch/arm/p2m.c           | 88 ++++++++++++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/domain.h | 10 +++++
 xen/include/asm-arm/p2m.h    |  4 ++
 3 files changed, 102 insertions(+)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 27418ee5ee..d8957dd872 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -50,6 +50,92 @@ static uint64_t generate_vttbr(uint16_t vmid, mfn_t root_mfn)
     return (mfn_to_maddr(root_mfn) | ((uint64_t)vmid << 48));
 }
 
+/* Return the size of the pool, rounded up to the nearest MB */
+unsigned int p2m_get_allocation(struct domain *d)
+{
+    unsigned long nr_pages = ACCESS_ONCE(d->arch.paging.p2m_total_pages);
+
+    return ROUNDUP(nr_pages, 1 << (20 - PAGE_SHIFT)) >> (20 - PAGE_SHIFT);
+}
+
+/*
+ * Set the pool of pages to the required number of pages.
+ * Returns 0 for success, non-zero for failure.
+ * Call with d->arch.paging.lock held.
+ */
+int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted)
+{
+    struct page_info *pg;
+
+    ASSERT(spin_is_locked(&d->arch.paging.lock));
+
+    for ( ; ; )
+    {
+        if ( d->arch.paging.p2m_total_pages < pages )
+        {
+            /* Need to allocate more memory from domheap */
+            pg = alloc_domheap_page(NULL, 0);
+            if ( pg == NULL )
+            {
+                printk(XENLOG_ERR "Failed to allocate P2M pages.\n");
+                return -ENOMEM;
+            }
+            ACCESS_ONCE(d->arch.paging.p2m_total_pages) =
+                d->arch.paging.p2m_total_pages + 1;
+            page_list_add_tail(pg, &d->arch.paging.p2m_freelist);
+        }
+        else if ( d->arch.paging.p2m_total_pages > pages )
+        {
+            /* Need to return memory to domheap */
+            pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
+            if( pg )
+            {
+                ACCESS_ONCE(d->arch.paging.p2m_total_pages) =
+                    d->arch.paging.p2m_total_pages - 1;
+                free_domheap_page(pg);
+            }
+            else
+            {
+                printk(XENLOG_ERR
+                       "Failed to free P2M pages, P2M freelist is empty.\n");
+                return -ENOMEM;
+            }
+        }
+        else
+            break;
+
+        /* Check to see if we need to yield and try again */
+        if ( preempted && general_preempt_check() )
+        {
+            *preempted = true;
+            return -ERESTART;
+        }
+    }
+
+    return 0;
+}
+
+int p2m_teardown_allocation(struct domain *d)
+{
+    int ret = 0;
+    bool preempted = false;
+
+    spin_lock(&d->arch.paging.lock);
+    if ( d->arch.paging.p2m_total_pages != 0 )
+    {
+        ret = p2m_set_allocation(d, 0, &preempted);
+        if ( preempted )
+        {
+            spin_unlock(&d->arch.paging.lock);
+            return -ERESTART;
+        }
+        ASSERT(d->arch.paging.p2m_total_pages == 0);
+    }
+    spin_unlock(&d->arch.paging.lock);
+
+    return ret;
+}
+
 /* Unlock the flush and do a P2M TLB flush if necessary */
 void p2m_write_unlock(struct p2m_domain *p2m)
 {
@@ -1599,7 +1685,9 @@ int p2m_init(struct domain *d)
     unsigned int cpu;
 
     rwlock_init(&p2m->lock);
+    spin_lock_init(&d->arch.paging.lock);
     INIT_PAGE_LIST_HEAD(&p2m->pages);
+    INIT_PAGE_LIST_HEAD(&d->arch.paging.p2m_freelist);
 
     p2m->vmid = INVALID_VMID;
 
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 7f8ddd3f5c..2f31795ab9 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -40,6 +40,14 @@ struct vtimer {
     uint64_t cval;
 };
 
+struct paging_domain {
+    spinlock_t lock;
+    /* Free P2M pages from the pre-allocated P2M pool */
+    struct page_list_head p2m_freelist;
+    /* Number of pages from the pre-allocated P2M pool */
+    unsigned long p2m_total_pages;
+};
+
 struct arch_domain
 {
 #ifdef CONFIG_ARM_64
@@ -51,6 +59,8 @@ struct arch_domain
 
     struct hvm_domain hvm;
 
+    struct paging_domain paging;
+
     struct vmmio vmmio;
 
     /* Continuable domain_relinquish_resources(). */
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index b3ba83283e..c9598740bd 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -218,6 +218,10 @@ void p2m_restore_state(struct vcpu *n);
 /* Print debugging/statistial info about a domain's p2m */
 void p2m_dump_info(struct domain *d);
 
+unsigned int p2m_get_allocation(struct domain *d);
+int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted);
+int p2m_teardown_allocation(struct domain *d);
+
 static inline void p2m_write_lock(struct p2m_domain *p2m)
 {
     write_lock(&p2m->lock);
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.16
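As a quick sanity check of the sizing arithmetic in the patch above, the MB rounding in p2m_get_allocation() can be modelled in Python (a sketch, not Xen code; assumes PAGE_SHIFT == 12, i.e. 4 KiB pages, as the rest of the series does):

```python
PAGE_SHIFT = 12                        # assumed: 4 KiB pages
PAGES_PER_MB = 1 << (20 - PAGE_SHIFT)  # 256 pages per MB

def p2m_get_allocation(nr_pages):
    # mirrors ROUNDUP(nr_pages, 1 << (20 - PAGE_SHIFT)) >> (20 - PAGE_SHIFT):
    # pool size in pages, rounded up to whole MB
    return (nr_pages + PAGES_PER_MB - 1) // PAGES_PER_MB
```

A 1-page pool therefore reports 1 MB and a 257-page pool reports 2 MB, matching the ROUNDUP semantics in the hunk.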


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:13:17 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:13:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420227.664860 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF4P-0007fr-FX; Tue, 11 Oct 2022 13:13:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420227.664860; Tue, 11 Oct 2022 13:13:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF4P-0007fj-Ct; Tue, 11 Oct 2022 13:13:17 +0000
Received: by outflank-mailman (input) for mailman id 420227;
 Tue, 11 Oct 2022 13:13:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF4O-0007fY-0r
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:13:16 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF4O-0002GC-04
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:13:16 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF4N-0001HE-VM
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:13:15 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=lrrRbwyLSkkEC+LmnqM72SkO/zNJMmn5ntEJyE32nKk=; b=kWfzTS+XMUJhTuQiUMQ84Rnzsp
	6uLMYXDxebvHs5USyvHTSdHxk/FTYi4Doll1qy9HrLI9/6uW3T4lpQp7TFP2NzCr2RNik+jBys1rL
	A5TUm5wv5dkBfKC9nt+LQ4iZ6ShCxvRvf0PsZCtiB45SmcL5N5OA6wjtJcok/wUyd5u8=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.16] xen/arm, libxl: Implement XEN_DOMCTL_shadow_op for Arm
Message-Id: <E1oiF4N-0001HE-VM@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:13:15 +0000

commit 3a16da801e14b8ff996b6f7408391ce488abd925
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Tue Oct 11 14:55:40 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:55:40 2022 +0200

    xen/arm, libxl: Implement XEN_DOMCTL_shadow_op for Arm
    
    This commit implements the `XEN_DOMCTL_shadow_op` support in Xen
    for Arm. The p2m pages pool size for xl guests is supposed to be
    determined by `XEN_DOMCTL_shadow_op`. Hence, this commit:
    
    - Introduces a function `p2m_domctl` and implements the subops
    `XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION` and
    `XEN_DOMCTL_SHADOW_OP_GET_ALLOCATION` of `XEN_DOMCTL_shadow_op`.
    
    - Adds the `XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION` support in libxl.
    
    This enables setting the shadow memory pool size when creating a
    guest from xl, and querying that size from Xen.
    
    Note that the `XEN_DOMCTL_shadow_op` added in this commit is only
    a dummy op, and the functionality of setting/getting p2m memory pool
    size for xl guests will be added in following commits.
    
    This is part of CVE-2022-33747 / XSA-409.
    
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    master commit: cf2a68d2ffbc3ce95e01449d46180bddb10d24a0
    master date: 2022-10-11 14:28:42 +0200
---
 tools/libs/light/libxl_arm.c | 12 ++++++++++++
 xen/arch/arm/domctl.c        | 32 ++++++++++++++++++++++++++++++++
 2 files changed, 44 insertions(+)

diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index 73a95e83af..22a0c561bb 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -131,6 +131,18 @@ int libxl__arch_domain_create(libxl__gc *gc,
                               libxl__domain_build_state *state,
                               uint32_t domid)
 {
+    libxl_ctx *ctx = libxl__gc_owner(gc);
+    unsigned int shadow_mb = DIV_ROUNDUP(d_config->b_info.shadow_memkb, 1024);
+
+    int r = xc_shadow_control(ctx->xch, domid,
+                              XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION,
+                              &shadow_mb, 0);
+    if (r) {
+        LOGED(ERROR, domid,
+              "Failed to set %u MiB shadow allocation", shadow_mb);
+        return ERROR_FAIL;
+    }
+
     return 0;
 }
 
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 1baf25c3d9..9bf72e6930 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -47,11 +47,43 @@ static int handle_vuart_init(struct domain *d,
     return rc;
 }
 
+static long p2m_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
+                       XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
+{
+    if ( unlikely(d == current->domain) )
+    {
+        printk(XENLOG_ERR "Tried to do a p2m domctl op on itself.\n");
+        return -EINVAL;
+    }
+
+    if ( unlikely(d->is_dying) )
+    {
+        printk(XENLOG_ERR "Tried to do a p2m domctl op on dying domain %u\n",
+               d->domain_id);
+        return -EINVAL;
+    }
+
+    switch ( sc->op )
+    {
+    case XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION:
+        return 0;
+    case XEN_DOMCTL_SHADOW_OP_GET_ALLOCATION:
+        return 0;
+    default:
+    {
+        printk(XENLOG_ERR "Bad p2m domctl op %u\n", sc->op);
+        return -EINVAL;
+    }
+    }
+}
+
 long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
                     XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     switch ( domctl->cmd )
     {
+    case XEN_DOMCTL_shadow_op:
+        return p2m_domctl(d, &domctl->u.shadow_op, u_domctl);
     case XEN_DOMCTL_cacheflush:
     {
         gfn_t s = _gfn(domctl->u.cacheflush.start_pfn);
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.16
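For reference, the unit conversions introduced by this patch (and consumed by the follow-up that wires up the allocation) can be sketched in Python (not libxl/Xen code; the 4 KiB page size is an assumption):

```python
def div_roundup(v, s):
    # mirrors libxl's DIV_ROUNDUP: round v up to the next multiple of s
    return (v + s - 1) // s

def shadow_mb_from_memkb(shadow_memkb):
    # libxl__arch_domain_create(): shadow_memkb from the guest config,
    # rounded up to whole MiB for XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION
    return div_roundup(shadow_memkb, 1024)

def pool_pages_from_mb(mb, page_shift=12):
    # the hypervisor side later converts sc->mb to pages with
    # mb << (20 - PAGE_SHIFT)
    return mb << (20 - page_shift)
```

So a config asking for 2049 KiB of shadow memory yields a 3 MiB request, i.e. 768 pool pages.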


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:13:27 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:13:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420228.664864 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF4Z-0007j6-HH; Tue, 11 Oct 2022 13:13:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420228.664864; Tue, 11 Oct 2022 13:13:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF4Z-0007iw-EN; Tue, 11 Oct 2022 13:13:27 +0000
Received: by outflank-mailman (input) for mailman id 420228;
 Tue, 11 Oct 2022 13:13:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF4Y-0007in-3u
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:13:26 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF4Y-0002GH-3D
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:13:26 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF4Y-0001Hd-2W
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:13:26 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=d9X88khntg/yB4j3oA29XNph8Nu/8Xx38OBZGvKwXEA=; b=x15Dqh1dI9o8zsPYe0gxhBFh6s
	KdRx9MLAtZdI1UtnA1NXLtLoREYqRs3ZeewQLFlMxuowDL8T/H9/vU4UlLvEwlh/jMSj5G4QcQ6n5
	GFoAZqz/C0tTduOS/0W5oQEAEJMUKepLbJzIJDFAU2o2UusDpsSPcvmNBSeMxfv3EzfA=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.16] xen/arm: Allocate and free P2M pages from the P2M pool
Message-Id: <E1oiF4Y-0001Hd-2W@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:13:26 +0000

commit 44e9dcc48b81bca202a5b31926125a6a59a4c72e
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Tue Oct 11 14:55:53 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:55:53 2022 +0200

    xen/arm: Allocate and free P2M pages from the P2M pool
    
    This commit sets up and tears down the p2m pages pool for
    non-privileged Arm guests by calling `p2m_set_allocation` and
    `p2m_teardown_allocation`.
    
    - For dom0, P2M pages should come from heap directly instead of p2m
    pool, so that the kernel may take advantage of the extended regions.
    
    - For xl guests, the setting of the p2m pool is called in
    `XEN_DOMCTL_shadow_op` and the p2m pool is destroyed in
    `domain_relinquish_resources`. Note that domctl->u.shadow_op.mb is
    updated with the new size when setting the p2m pool.
    
    - For dom0less domUs, the p2m pool is set up before allocating
    memory during domain creation. Users can specify the p2m pool size
    via the `xen,domain-p2m-mem-mb` dts property.
    
    To actually allocate/free pages from the p2m pool, this commit adds
    two helper functions, namely `p2m_alloc_page` and `p2m_free_page`.
    By replacing `alloc_domheap_page` and `free_domheap_page` with these
    two helpers, p2m pages are added to/removed from the p2m pool rather
    than the heap.
    
    Since the page returned by `p2m_alloc_page` is already cleaned, take
    the opportunity to remove the redundant `clean_page` in
    `p2m_create_table`.
    
    This is part of CVE-2022-33747 / XSA-409.
    
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    master commit: cbea5a1149ca7fd4b7cdbfa3ec2e4f109b601ff7
    master date: 2022-10-11 14:28:44 +0200
---
 docs/misc/arm/device-tree/booting.txt |  8 +++++
 xen/arch/arm/domain.c                 |  6 ++++
 xen/arch/arm/domain_build.c           | 29 ++++++++++++++++++
 xen/arch/arm/domctl.c                 | 23 +++++++++++++-
 xen/arch/arm/p2m.c                    | 57 ++++++++++++++++++++++++++++++++---
 5 files changed, 118 insertions(+), 5 deletions(-)

diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
index 71895663a4..d92ccc56ff 100644
--- a/docs/misc/arm/device-tree/booting.txt
+++ b/docs/misc/arm/device-tree/booting.txt
@@ -182,6 +182,14 @@ with the following properties:
     Both #address-cells and #size-cells need to be specified because
     both sub-nodes (described shortly) have reg properties.
 
+- xen,domain-p2m-mem-mb
+
+    Optional. A 32-bit integer specifying the amount of megabytes of RAM
+    used for the domain P2M pool. This is in-sync with the shadow_memory
+    option in xl.cfg. Leaving this field empty in device tree will lead to
+    the default size of domain P2M pool, i.e. 1MB per guest vCPU plus 4KB
+    per MB of guest RAM plus 512KB for guest extended regions.
+
 Under the "xen,domain" compatible node, one or more sub-nodes are present
 for the DomU kernel and ramdisk.
 
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 2694c39127..a818f33a1a 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -997,6 +997,7 @@ enum {
     PROG_page,
     PROG_mapping,
     PROG_p2m,
+    PROG_p2m_pool,
     PROG_done,
 };
 
@@ -1062,6 +1063,11 @@ int domain_relinquish_resources(struct domain *d)
         if ( ret )
             return ret;
 
+    PROGRESS(p2m_pool):
+        ret = p2m_teardown_allocation(d);
+        if( ret )
+            return ret;
+
     PROGRESS(done):
         break;
 
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index d02bacbcd1..8aec3755ca 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2833,6 +2833,21 @@ static void __init find_gnttab_region(struct domain *d,
            kinfo->gnttab_start, kinfo->gnttab_start + kinfo->gnttab_size);
 }
 
+static unsigned long __init domain_p2m_pages(unsigned long maxmem_kb,
+                                             unsigned int smp_cpus)
+{
+    /*
+     * Keep in sync with libxl__get_required_paging_memory().
+     * 256 pages (1MB) per vcpu, plus 1 page per MiB of RAM for the P2M map,
+     * plus 128 pages to cover extended regions.
+     */
+    unsigned long memkb = 4 * (256 * smp_cpus + (maxmem_kb / 1024) + 128);
+
+    BUILD_BUG_ON(PAGE_SIZE != SZ_4K);
+
+    return DIV_ROUND_UP(memkb, 1024) << (20 - PAGE_SHIFT);
+}
+
 static int __init construct_domain(struct domain *d, struct kernel_info *kinfo)
 {
     unsigned int i;
@@ -2924,6 +2939,8 @@ static int __init construct_domU(struct domain *d,
     struct kernel_info kinfo = {};
     int rc;
     u64 mem;
+    u32 p2m_mem_mb;
+    unsigned long p2m_pages;
 
     rc = dt_property_read_u64(node, "memory", &mem);
     if ( !rc )
@@ -2933,6 +2950,18 @@ static int __init construct_domU(struct domain *d,
     }
     kinfo.unassigned_mem = (paddr_t)mem * SZ_1K;
 
+    rc = dt_property_read_u32(node, "xen,domain-p2m-mem-mb", &p2m_mem_mb);
+    /* If xen,domain-p2m-mem-mb is not specified, use the default value. */
+    p2m_pages = rc ?
+                p2m_mem_mb << (20 - PAGE_SHIFT) :
+                domain_p2m_pages(mem, d->max_vcpus);
+
+    spin_lock(&d->arch.paging.lock);
+    rc = p2m_set_allocation(d, p2m_pages, NULL);
+    spin_unlock(&d->arch.paging.lock);
+    if ( rc != 0 )
+        return rc;
+
     printk("*** LOADING DOMU cpus=%u memory=%"PRIx64"KB ***\n", d->max_vcpus, mem);
 
     kinfo.vpl011 = dt_property_read_bool(node, "vpl011");
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 9bf72e6930..c8fdeb1240 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -50,6 +50,9 @@ static int handle_vuart_init(struct domain *d,
 static long p2m_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
                        XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
+    long rc;
+    bool preempted = false;
+
     if ( unlikely(d == current->domain) )
     {
         printk(XENLOG_ERR "Tried to do a p2m domctl op on itself.\n");
@@ -66,9 +69,27 @@ static long p2m_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
     switch ( sc->op )
     {
     case XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION:
-        return 0;
+    {
+        /* Allow and handle preemption */
+        spin_lock(&d->arch.paging.lock);
+        rc = p2m_set_allocation(d, sc->mb << (20 - PAGE_SHIFT), &preempted);
+        spin_unlock(&d->arch.paging.lock);
+
+        if ( preempted )
+            /* Not finished. Set up to re-run the call. */
+            rc = hypercall_create_continuation(__HYPERVISOR_domctl, "h",
+                                               u_domctl);
+        else
+            /* Finished. Return the new allocation. */
+            sc->mb = p2m_get_allocation(d);
+
+        return rc;
+    }
     case XEN_DOMCTL_SHADOW_OP_GET_ALLOCATION:
+    {
+        sc->mb = p2m_get_allocation(d);
         return 0;
+    }
     default:
     {
         printk(XENLOG_ERR "Bad p2m domctl op %u\n", sc->op);
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index d8957dd872..b2d856a801 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -50,6 +50,54 @@ static uint64_t generate_vttbr(uint16_t vmid, mfn_t root_mfn)
     return (mfn_to_maddr(root_mfn) | ((uint64_t)vmid << 48));
 }
 
+static struct page_info *p2m_alloc_page(struct domain *d)
+{
+    struct page_info *pg;
+
+    spin_lock(&d->arch.paging.lock);
+    /*
+     * For hardware domain, there should be no limit in the number of pages that
+     * can be allocated, so that the kernel may take advantage of the extended
+     * regions. Hence, allocate p2m pages for hardware domains from heap.
+     */
+    if ( is_hardware_domain(d) )
+    {
+        pg = alloc_domheap_page(NULL, 0);
+        if ( pg == NULL )
+        {
+            printk(XENLOG_G_ERR "Failed to allocate P2M pages for hwdom.\n");
+            spin_unlock(&d->arch.paging.lock);
+            return NULL;
+        }
+    }
+    else
+    {
+        pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
+        if ( unlikely(!pg) )
+        {
+            spin_unlock(&d->arch.paging.lock);
+            return NULL;
+        }
+        d->arch.paging.p2m_total_pages--;
+    }
+    spin_unlock(&d->arch.paging.lock);
+
+    return pg;
+}
+
+static void p2m_free_page(struct domain *d, struct page_info *pg)
+{
+    spin_lock(&d->arch.paging.lock);
+    if ( is_hardware_domain(d) )
+        free_domheap_page(pg);
+    else
+    {
+        d->arch.paging.p2m_total_pages++;
+        page_list_add_tail(pg, &d->arch.paging.p2m_freelist);
+    }
+    spin_unlock(&d->arch.paging.lock);
+}
+
 /* Return the size of the pool, rounded up to the nearest MB */
 unsigned int p2m_get_allocation(struct domain *d)
 {
@@ -751,7 +799,7 @@ static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry)
 
     ASSERT(!p2m_is_valid(*entry));
 
-    page = alloc_domheap_page(NULL, 0);
+    page = p2m_alloc_page(p2m->domain);
     if ( page == NULL )
         return -ENOMEM;
 
@@ -878,7 +926,7 @@ static void p2m_free_entry(struct p2m_domain *p2m,
     pg = mfn_to_page(mfn);
 
     page_list_del(pg, &p2m->pages);
-    free_domheap_page(pg);
+    p2m_free_page(p2m->domain, pg);
 }
 
 static bool p2m_split_superpage(struct p2m_domain *p2m, lpae_t *entry,
@@ -902,7 +950,7 @@ static bool p2m_split_superpage(struct p2m_domain *p2m, lpae_t *entry,
     ASSERT(level < target);
     ASSERT(p2m_is_superpage(*entry, level));
 
-    page = alloc_domheap_page(NULL, 0);
+    page = p2m_alloc_page(p2m->domain);
     if ( !page )
         return false;
 
@@ -1641,7 +1689,7 @@ int p2m_teardown(struct domain *d)
 
     while ( (pg = page_list_remove_head(&p2m->pages)) )
     {
-        free_domheap_page(pg);
+        p2m_free_page(p2m->domain, pg);
         count++;
         /* Arbitrarily preempt every 512 iterations */
         if ( !(count % 512) && hypercall_preempt_check() )
@@ -1665,6 +1713,7 @@ void p2m_final_teardown(struct domain *d)
         return;
 
     ASSERT(page_list_empty(&p2m->pages));
+    ASSERT(page_list_empty(&d->arch.paging.p2m_freelist));
 
     if ( p2m->root )
         free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.16
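The default pool size formula in domain_p2m_pages() above can be checked with a small Python model (a sketch under the series' 4 KiB page assumption, not Xen code):

```python
def domain_p2m_pages(maxmem_kb, smp_cpus, page_shift=12):
    # 256 pages (1 MB) per vcpu, 1 page per MiB of guest RAM and
    # 128 pages for extended regions, expressed in KiB (hence the
    # factor of 4), rounded up to whole MiB, then back to pages
    memkb = 4 * (256 * smp_cpus + maxmem_kb // 1024 + 128)
    return ((memkb + 1023) // 1024) << (20 - page_shift)
```

E.g. even a 1-vCPU domU with no RAM gets a 2 MiB (512-page) pool, and a 2-vCPU domU with 1 GiB of RAM gets 7 MiB (1792 pages).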


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:13:37 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:13:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420229.664868 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF4j-0007lh-J7; Tue, 11 Oct 2022 13:13:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420229.664868; Tue, 11 Oct 2022 13:13:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF4j-0007lY-Ft; Tue, 11 Oct 2022 13:13:37 +0000
Received: by outflank-mailman (input) for mailman id 420229;
 Tue, 11 Oct 2022 13:13:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF4i-0007lB-6s
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:13:36 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF4i-0002GU-6D
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:13:36 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF4i-0001I4-5W
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:13:36 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=17qHJ+YB5bgsp0A4v77XhjRnRMEgJnQR3MjmFrvixJY=; b=541zQmWKkgKe17vwX1Rl6+7GSL
	mmE16ALJF6MuSGINhtwa8OKPgFWkJF2vcMUZk77KFO+4Jpts1pKYAf80Ad5Ya1md8ZlziUBFLGJq4
	B4p79Zq/wMuKxmedjS5nfrJptJqEM+GP5mD+rb1zyJVaXpkmktPJyVmO6LpYMhGfhADg=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.16] gnttab: correct locking on transitive grant copy error path
Message-Id: <E1oiF4i-0001I4-5W@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:13:36 +0000

commit 32cb81501c8b858fe9a451650804ec3024a8b364
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 14:56:29 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:56:29 2022 +0200

    gnttab: correct locking on transitive grant copy error path
    
    While the comment next to the lock dropping in preparation of
    recursively calling acquire_grant_for_copy() mistakenly talks about the
    rd == td case (excluded a few lines further up), the same concerns apply
    to the calling of release_grant_for_copy() on a subsequent error path.
    
    This is CVE-2022-33748 / XSA-411.
    
    Fixes: ad48fb963dbf ("gnttab: fix transitive grant handling")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    master commit: 6e3aab858eef614a21a782a3b73acc88e74690ea
    master date: 2022-10-11 14:29:30 +0200
---
 xen/common/grant_table.c | 19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 4c742cd8fe..d8ca645b96 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -2613,9 +2613,8 @@ acquire_grant_for_copy(
                      trans_domid);
 
         /*
-         * acquire_grant_for_copy() could take the lock on the
-         * remote table (if rd == td), so we have to drop the lock
-         * here and reacquire.
+         * acquire_grant_for_copy() will take the lock on the remote table,
+         * so we have to drop the lock here and reacquire.
          */
         active_entry_release(act);
         grant_read_unlock(rgt);
@@ -2652,11 +2651,25 @@ acquire_grant_for_copy(
                           act->trans_gref != trans_gref ||
                           !act->is_sub_page)) )
         {
+            /*
+             * Like above for acquire_grant_for_copy() we need to drop and then
+             * re-acquire the locks here to prevent lock order inversion issues.
+             * Unlike for acquire_grant_for_copy() we don't need to re-check
+             * anything, as release_grant_for_copy() doesn't depend on the grant
+             * table entry: It only updates internal state and the status flags.
+             */
+            active_entry_release(act);
+            grant_read_unlock(rgt);
+
             release_grant_for_copy(td, trans_gref, readonly);
             rcu_unlock_domain(td);
+
+            grant_read_lock(rgt);
+            act = active_entry_acquire(rgt, gref);
             reduce_status_for_pin(rd, act, status, readonly);
             active_entry_release(act);
             grant_read_unlock(rgt);
+
             put_page(*page);
             *page = NULL;
             return ERESTART;
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.16
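The locking pattern of the fix can be illustrated with a small Python model (a sketch, not Xen code; the real patch also releases the active entry, which is collapsed into a single lock here). With a non-recursive lock, calling release_grant_for_copy() while still holding the table lock could deadlock or invert lock order when it takes the same lock internally, so the fix drops and reacquires around the call:

```python
import threading

rgt_lock = threading.Lock()   # stands in for the remote grant table's lock

def error_path(release_fn, reduce_fn):
    # caller enters with rgt_lock held, as acquire_grant_for_copy() does
    rgt_lock.release()        # grant_read_unlock(rgt)
    release_fn()              # release_grant_for_copy(); may lock rgt itself
    rgt_lock.acquire()        # grant_read_lock(rgt)
    reduce_fn()               # reduce_status_for_pin() needs the lock held
    rgt_lock.release()

order = []

def nested_release():
    # would deadlock on this non-recursive lock had error_path() not
    # dropped rgt_lock first -- the scenario the patch guards against
    with rgt_lock:
        order.append("release")

rgt_lock.acquire()
error_path(nested_release, lambda: order.append("reduce"))
```

No re-validation is needed after reacquiring, because (as the comment in the patch notes) release_grant_for_copy() only updates internal state and status flags.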


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:13:47 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:13:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420230.664873 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF4t-0007oY-KG; Tue, 11 Oct 2022 13:13:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420230.664873; Tue, 11 Oct 2022 13:13:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF4t-0007oQ-HO; Tue, 11 Oct 2022 13:13:47 +0000
Received: by outflank-mailman (input) for mailman id 420230;
 Tue, 11 Oct 2022 13:13:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF4s-0007oG-AJ
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:13:46 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF4s-0002Ge-9d
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:13:46 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF4s-0001IT-8c
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:13:46 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=5ld2xN+b+NYpVsTMGpcSZYOH/v3ypb21U2sSI7kto0o=; b=Utptj7HTP+Jh457QUgM7q0c8ik
	T3YceSpDaZq3WfwuDG3e15KXE9wEemTKcrlhpdsRwqgHDVT4eu/XfSW84G+oexg6MCTWajwRAv/jI
	AJ8gXmIZ8glM+teLH4NR0rU3C/WC3Wj7YDoWwnBTj0MGbU/ZdzPJSN9L7kD+ssZ+WszI=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.16] tools/libxl: Replace deprecated -soundhw on QEMU command line
Message-Id: <E1oiF4s-0001IT-8c@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:13:46 +0000

commit e85e2a3c17b6cd38de041cdaf14d9efdcdabad1a
Author:     Anthony PERARD <anthony.perard@citrix.com>
AuthorDate: Tue Oct 11 14:59:10 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:59:10 2022 +0200

    tools/libxl: Replace deprecated -soundhw on QEMU command line
    
    -soundhw has been deprecated since 825ff02911c9 ("audio: add soundhw
    deprecation notice"), QEMU v5.1, and has been removed for the upcoming
    v7.1 by 039a68373c45 ("introduce -audio as a replacement for -soundhw").
    
    Instead, we can just add the sound card with "-device" for most options
    that "-soundhw" could handle. "-device" is an option that existed
    before QEMU 1.0, and could already be used to add audio hardware.
    
    The list of possible options for libxl's "soundhw" is taken from
    QEMU 7.0.
    
    The options for "soundhw" are listed in order of preference in the
    manual. The first three (hda, ac97, es1370) are PCI devices and easy
    to test on Linux, while the last four are ISA devices which don't
    seem to work out of the box on Linux.
    
    The sound card 'pcspk' isn't listed even though it used to be accepted
    by '-soundhw', because QEMU crashes when trying to add it to a Xen
    domain. Also, it wouldn't work with "-device"; it might need to be
    "-machine pcspk-audiodev=default" instead.
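
    The replacement described above boils down to validating the "soundhw"
    string against a known list and mapping it to "-device" arguments, with
    hda as the one special case (controller plus codec). A minimal
    standalone sketch of that idea (the function name and return convention
    are illustrative, not libxl's actual API):

    ```c
    #include <assert.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical sketch: map a libxl-style "soundhw" option to the
     * QEMU "-device" argument(s) it would be replaced with.  "hda" is
     * the special case: it expands to a controller plus a codec. */
    static const char *soundhw_to_device(const char *soundhw)
    {
        /* Options accepted by QEMU 7.0's -soundhw, minus pcspk. */
        static const char *known[] = {
            "hda", "ac97", "es1370", "adlib", "cs4231a", "gus", "sb16",
        };
        size_t i;

        for ( i = 0; i < sizeof(known) / sizeof(known[0]); i++ )
            if ( strcmp(soundhw, known[i]) == 0 )
                return strcmp(soundhw, "hda") == 0
                       ? "intel-hda + hda-duplex" /* two -device args */
                       : known[i];   /* device name passed through as-is */

        return NULL; /* unknown option: caller fails with ERROR_INVAL */
    }

    int main(void)
    {
        assert(strcmp(soundhw_to_device("ac97"), "ac97") == 0);
        assert(strcmp(soundhw_to_device("hda"), "intel-hda + hda-duplex") == 0);
        assert(soundhw_to_device("pcspk") == NULL); /* rejected, see above */
        printf("ok\n");
        return 0;
    }
    ```

    The real patch gets the validation for free by round-tripping through
    the libxl__qemu_soundhw enumeration instead of a string table.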
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
    master commit: 62ca138c2c052187783aca3957d3f47c4dcfd683
    master date: 2022-08-18 09:25:50 +0200
---
 docs/man/xl.cfg.5.pod.in                  |  6 +++---
 tools/libs/light/libxl_dm.c               | 19 ++++++++++++++++++-
 tools/libs/light/libxl_types_internal.idl | 10 ++++++++++
 3 files changed, 31 insertions(+), 4 deletions(-)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index eda1e77ebd..ab7541f22c 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -2545,9 +2545,9 @@ The form serial=DEVICE is also accepted for backwards compatibility.
 
 =item B<soundhw="DEVICE">
 
-Select the virtual sound card to expose to the guest. The valid
-devices are defined by the device model configuration, please see the
-B<qemu(1)> manpage for details. The default is not to export any sound
+Select the virtual sound card to expose to the guest. The valid devices are
+B<hda>, B<ac97>, B<es1370>, B<adlib>, B<cs4231a>, B<gus>, B<sb16> if they are
+available with the device model QEMU. The default is not to export any sound
 device.
 
 =item B<vkb_device=BOOLEAN>
diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index 04bf5d8563..fc264a3a13 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -1204,6 +1204,7 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
     uint64_t ram_size;
     const char *path, *chardev;
     bool is_stubdom = libxl_defbool_val(b_info->device_model_stubdomain);
+    int rc;
 
     dm_args = flexarray_make(gc, 16, 1);
     dm_envs = flexarray_make(gc, 16, 1);
@@ -1531,7 +1532,23 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
             }
         }
         if (b_info->u.hvm.soundhw) {
-            flexarray_vappend(dm_args, "-soundhw", b_info->u.hvm.soundhw, NULL);
+            libxl__qemu_soundhw soundhw;
+
+            rc = libxl__qemu_soundhw_from_string(b_info->u.hvm.soundhw, &soundhw);
+            if (rc) {
+                LOGD(ERROR, guest_domid, "Unknown soundhw option '%s'", b_info->u.hvm.soundhw);
+                return ERROR_INVAL;
+            }
+
+            switch (soundhw) {
+            case LIBXL__QEMU_SOUNDHW_HDA:
+                flexarray_vappend(dm_args, "-device", "intel-hda",
+                                  "-device", "hda-duplex", NULL);
+                break;
+            default:
+                flexarray_append_pair(dm_args, "-device",
+                                      (char*)libxl__qemu_soundhw_to_string(soundhw));
+            }
         }
         if (!libxl__acpi_defbool_val(b_info)) {
             flexarray_append(dm_args, "-no-acpi");
diff --git a/tools/libs/light/libxl_types_internal.idl b/tools/libs/light/libxl_types_internal.idl
index 3593e21dbb..caa08d3229 100644
--- a/tools/libs/light/libxl_types_internal.idl
+++ b/tools/libs/light/libxl_types_internal.idl
@@ -55,3 +55,13 @@ libxl__device_action = Enumeration("device_action", [
     (1, "ADD"),
     (2, "REMOVE"),
     ])
+
+libxl__qemu_soundhw = Enumeration("qemu_soundhw", [
+    (1, "ac97"),
+    (2, "adlib"),
+    (3, "cs4231a"),
+    (4, "es1370"),
+    (5, "gus"),
+    (6, "hda"),
+    (7, "sb16"),
+    ])
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.16


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:13:57 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:13:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420231.664878 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF53-0007r4-Mo; Tue, 11 Oct 2022 13:13:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420231.664878; Tue, 11 Oct 2022 13:13:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF53-0007qu-Ir; Tue, 11 Oct 2022 13:13:57 +0000
Received: by outflank-mailman (input) for mailman id 420231;
 Tue, 11 Oct 2022 13:13:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF52-0007qb-DA
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:13:56 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF52-0002Gi-CQ
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:13:56 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF52-0001Kk-Be
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:13:56 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=1zZj182lMLXmGTwmf0oB6MImt1bI2s/V26sQP2IUdIo=; b=69YHubuuGH2zvr41GoL1GLD0CG
	RVqDBGNnTsqAGNolM1gY27EljK/X52hQld25QR6KgkwwWgi+BK6tWiPhS+J7Oq+6A6N3+DDXHPd4F
	IQW2JOsdyyQFTHZMA37LzAdRTdvkMg+SEH5HUhv8FyUKOGslnAsKVprat+MBKB0S9OVU=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.16] x86/CPUID: surface suitable value in EBX of XSTATE subleaf 1
Message-Id: <E1oiF52-0001Kk-Be@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:13:56 +0000

commit e8882bcfe35520e950ba60acd6e67e65f1ce90a8
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 14:59:26 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:59:26 2022 +0200

    x86/CPUID: surface suitable value in EBX of XSTATE subleaf 1
    
    While the SDM isn't very clear about this, our present behavior makes
    Linux 5.19 unhappy. As of commit 8ad7e8f69695 ("x86/fpu/xsave: Support
    XSAVEC in the kernel") they're using this CPUID output also to size
    the compacted area used by XSAVEC. Getting back zero there isn't well
    received, yet for PV that's the default on capable hardware: XSAVES
    isn't exposed to PV domains.
    
    Considering that the size reported is that of the compacted save area,
    I view Linux's assumption as appropriate (short of the SDM properly
    considering the case). Therefore we need to populate the field also
    when only XSAVEC is supported for a guest.
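
    The fix reduces to widening a predicate: the compacted save-area size
    in EBX should be reported whenever the guest has either XSAVEC or
    XSAVES, not only XSAVES. A standalone sketch of just that condition
    (the struct is an illustrative stand-in, not Xen's cpuid policy type):

    ```c
    #include <assert.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Illustrative stand-in for the relevant policy bits. */
    struct xstate_policy {
        bool xsavec;
        bool xsaves;
    };

    /* Should CPUID leaf 0xd, subleaf 1, EBX carry the compacted
     * save-area size?  Before the fix this tested only xsaves; PV
     * guests (which never see XSAVES) then got zero even when XSAVEC
     * was available. */
    static bool report_compacted_size(const struct xstate_policy *p)
    {
        return p->xsavec || p->xsaves;
    }

    int main(void)
    {
        struct xstate_policy pv   = { .xsavec = true,  .xsaves = false };
        struct xstate_policy hvm  = { .xsavec = true,  .xsaves = true  };
        struct xstate_policy none = { .xsavec = false, .xsaves = false };

        assert(report_compacted_size(&pv));  /* the case the fix adds */
        assert(report_compacted_size(&hvm));
        assert(!report_compacted_size(&none));
        printf("ok\n");
        return 0;
    }
    ```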
    
    Fixes: 460b9a4b3630 ("x86/xsaves: enable xsaves/xrstors for hvm guest")
    Fixes: 8d050ed1097c ("x86: don't expose XSAVES capability to PV guests")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: c3bd0b83ea5b7c0da6542687436042eeea1e7909
    master date: 2022-08-24 14:23:59 +0200
---
 xen/arch/x86/cpuid.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index ff335f1639..a647331f47 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -1060,7 +1060,7 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
         switch ( subleaf )
         {
         case 1:
-            if ( p->xstate.xsaves )
+            if ( p->xstate.xsavec || p->xstate.xsaves )
             {
                 /*
                  * TODO: Figure out what to do for XSS state.  VT-x manages
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.16


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:14:07 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:14:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420232.664879 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF5D-0007uY-P5; Tue, 11 Oct 2022 13:14:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420232.664879; Tue, 11 Oct 2022 13:14:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF5D-0007uR-MZ; Tue, 11 Oct 2022 13:14:07 +0000
Received: by outflank-mailman (input) for mailman id 420232;
 Tue, 11 Oct 2022 13:14:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF5C-0007uE-GF
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:14:06 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF5C-0002H1-Fa
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:14:06 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF5C-0001MI-Ep
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:14:06 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=D+pxOYy9XEqZHpMRcI6asD8J06SIWyAtpc8t74Wk/ps=; b=bp/q2v7nkWHu1591WLhrx3/Rmw
	LuCTKadeljlr9gt6A4Cbfuzz7dQn4bVOfcIcGSqJfmZ5g8nhbJuGfau92THxndjPTLxgglci42LuV
	ZUtDa6rDTD1RgAiFt5MexjTDA+YT3SAF4d8Y2HCzvzVS4ng3xOs9oTkHdcxc6F/7MinY=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.16] xen/sched: introduce cpupool_update_node_affinity()
Message-Id: <E1oiF5C-0001MI-Ep@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:14:06 +0000

commit d4e971ad12dd27913dffcf96b5de378ea7b476e1
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Tue Oct 11 14:59:40 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:59:40 2022 +0200

    xen/sched: introduce cpupool_update_node_affinity()
    
    For updating the node affinities of all domains in a cpupool add a new
    function cpupool_update_node_affinity().
    
    In order to avoid multiple allocations of cpumasks carve out memory
    allocation and freeing from domain_update_node_affinity() into new
    helpers, which can be used by cpupool_update_node_affinity().
    
    Modify domain_update_node_affinity() to take an additional parameter
    for passing the allocated memory in and to allocate and free the memory
    via the new helpers in case NULL was passed.
    
    This will help later to pre-allocate the cpumasks in order to avoid
    allocations in stop-machine context.
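
    The carved-out helpers follow the usual two-resource pattern: if the
    second allocation fails, the first must be released before reporting
    failure, and freeing happens in reverse order. A standalone sketch
    with plain heap allocation standing in for alloc_cpumask_var() and
    free_cpumask_var() (the types are simplified; only the names mirror
    the patch):

    ```c
    #include <assert.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Simplified stand-in for a pair of cpumask_var_t masks. */
    struct affinity_masks {
        unsigned long *hard;
        unsigned long *soft;
    };

    static bool alloc_affinity_masks(struct affinity_masks *affinity)
    {
        affinity->hard = calloc(1, sizeof(unsigned long));
        if ( !affinity->hard )
            return false;
        affinity->soft = calloc(1, sizeof(unsigned long));
        if ( !affinity->soft )
        {
            /* Second allocation failed: undo the first before failing. */
            free(affinity->hard);
            affinity->hard = NULL;
            return false;
        }
        return true;
    }

    static void free_affinity_masks(struct affinity_masks *affinity)
    {
        /* Release in reverse order of allocation. */
        free(affinity->soft);
        free(affinity->hard);
    }

    int main(void)
    {
        struct affinity_masks masks;

        assert(alloc_affinity_masks(&masks));
        assert(masks.hard && masks.soft);
        free_affinity_masks(&masks);
        printf("ok\n");
        return 0;
    }
    ```

    With the masks allocated once, cpupool_update_node_affinity() can
    reuse the same pair across every domain in the pool instead of
    allocating per domain.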
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Tested-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: a83fa1e2b96ace65b45dde6954d67012633a082b
    master date: 2022-09-05 11:42:30 +0100
---
 xen/common/sched/core.c    | 54 +++++++++++++++++++++++++++++++---------------
 xen/common/sched/cpupool.c | 39 ++++++++++++++++++---------------
 xen/common/sched/private.h |  7 ++++++
 xen/include/xen/sched.h    |  9 +++++++-
 4 files changed, 74 insertions(+), 35 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index f07bd2681f..065a83eca9 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -1824,9 +1824,28 @@ int vcpu_affinity_domctl(struct domain *d, uint32_t cmd,
     return ret;
 }
 
-void domain_update_node_affinity(struct domain *d)
+bool alloc_affinity_masks(struct affinity_masks *affinity)
 {
-    cpumask_var_t dom_cpumask, dom_cpumask_soft;
+    if ( !alloc_cpumask_var(&affinity->hard) )
+        return false;
+    if ( !alloc_cpumask_var(&affinity->soft) )
+    {
+        free_cpumask_var(affinity->hard);
+        return false;
+    }
+
+    return true;
+}
+
+void free_affinity_masks(struct affinity_masks *affinity)
+{
+    free_cpumask_var(affinity->soft);
+    free_cpumask_var(affinity->hard);
+}
+
+void domain_update_node_aff(struct domain *d, struct affinity_masks *affinity)
+{
+    struct affinity_masks masks;
     cpumask_t *dom_affinity;
     const cpumask_t *online;
     struct sched_unit *unit;
@@ -1836,14 +1855,16 @@ void domain_update_node_affinity(struct domain *d)
     if ( !d->vcpu || !d->vcpu[0] )
         return;
 
-    if ( !zalloc_cpumask_var(&dom_cpumask) )
-        return;
-    if ( !zalloc_cpumask_var(&dom_cpumask_soft) )
+    if ( !affinity )
     {
-        free_cpumask_var(dom_cpumask);
-        return;
+        affinity = &masks;
+        if ( !alloc_affinity_masks(affinity) )
+            return;
     }
 
+    cpumask_clear(affinity->hard);
+    cpumask_clear(affinity->soft);
+
     online = cpupool_domain_master_cpumask(d);
 
     spin_lock(&d->node_affinity_lock);
@@ -1864,22 +1885,21 @@ void domain_update_node_affinity(struct domain *d)
          */
         for_each_sched_unit ( d, unit )
         {
-            cpumask_or(dom_cpumask, dom_cpumask, unit->cpu_hard_affinity);
-            cpumask_or(dom_cpumask_soft, dom_cpumask_soft,
-                       unit->cpu_soft_affinity);
+            cpumask_or(affinity->hard, affinity->hard, unit->cpu_hard_affinity);
+            cpumask_or(affinity->soft, affinity->soft, unit->cpu_soft_affinity);
         }
         /* Filter out non-online cpus */
-        cpumask_and(dom_cpumask, dom_cpumask, online);
-        ASSERT(!cpumask_empty(dom_cpumask));
+        cpumask_and(affinity->hard, affinity->hard, online);
+        ASSERT(!cpumask_empty(affinity->hard));
         /* And compute the intersection between hard, online and soft */
-        cpumask_and(dom_cpumask_soft, dom_cpumask_soft, dom_cpumask);
+        cpumask_and(affinity->soft, affinity->soft, affinity->hard);
 
         /*
          * If not empty, the intersection of hard, soft and online is the
          * narrowest set we want. If empty, we fall back to hard&online.
          */
-        dom_affinity = cpumask_empty(dom_cpumask_soft) ?
-                           dom_cpumask : dom_cpumask_soft;
+        dom_affinity = cpumask_empty(affinity->soft) ? affinity->hard
+                                                     : affinity->soft;
 
         nodes_clear(d->node_affinity);
         for_each_cpu ( cpu, dom_affinity )
@@ -1888,8 +1908,8 @@ void domain_update_node_affinity(struct domain *d)
 
     spin_unlock(&d->node_affinity_lock);
 
-    free_cpumask_var(dom_cpumask_soft);
-    free_cpumask_var(dom_cpumask);
+    if ( affinity == &masks )
+        free_affinity_masks(affinity);
 }
 
 typedef long ret_t;
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 8c6e6eb9cc..45b6ff9956 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -401,6 +401,25 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c)
     return ret;
 }
 
+/* Update affinities of all domains in a cpupool. */
+static void cpupool_update_node_affinity(const struct cpupool *c)
+{
+    struct affinity_masks masks;
+    struct domain *d;
+
+    if ( !alloc_affinity_masks(&masks) )
+        return;
+
+    rcu_read_lock(&domlist_read_lock);
+
+    for_each_domain_in_cpupool(d, c)
+        domain_update_node_aff(d, &masks);
+
+    rcu_read_unlock(&domlist_read_lock);
+
+    free_affinity_masks(&masks);
+}
+
 /*
  * assign a specific cpu to a cpupool
  * cpupool_lock must be held
@@ -408,7 +427,6 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c)
 static int cpupool_assign_cpu_locked(struct cpupool *c, unsigned int cpu)
 {
     int ret;
-    struct domain *d;
     const cpumask_t *cpus;
 
     cpus = sched_get_opt_cpumask(c->gran, cpu);
@@ -433,12 +451,7 @@ static int cpupool_assign_cpu_locked(struct cpupool *c, unsigned int cpu)
 
     rcu_read_unlock(&sched_res_rculock);
 
-    rcu_read_lock(&domlist_read_lock);
-    for_each_domain_in_cpupool(d, c)
-    {
-        domain_update_node_affinity(d);
-    }
-    rcu_read_unlock(&domlist_read_lock);
+    cpupool_update_node_affinity(c);
 
     return 0;
 }
@@ -447,18 +460,14 @@ static int cpupool_unassign_cpu_finish(struct cpupool *c)
 {
     int cpu = cpupool_moving_cpu;
     const cpumask_t *cpus;
-    struct domain *d;
     int ret;
 
     if ( c != cpupool_cpu_moving )
         return -EADDRNOTAVAIL;
 
-    /*
-     * We need this for scanning the domain list, both in
-     * cpu_disable_scheduler(), and at the bottom of this function.
-     */
     rcu_read_lock(&domlist_read_lock);
     ret = cpu_disable_scheduler(cpu);
+    rcu_read_unlock(&domlist_read_lock);
 
     rcu_read_lock(&sched_res_rculock);
     cpus = get_sched_res(cpu)->cpus;
@@ -485,11 +494,7 @@ static int cpupool_unassign_cpu_finish(struct cpupool *c)
     }
     rcu_read_unlock(&sched_res_rculock);
 
-    for_each_domain_in_cpupool(d, c)
-    {
-        domain_update_node_affinity(d);
-    }
-    rcu_read_unlock(&domlist_read_lock);
+    cpupool_update_node_affinity(c);
 
     return ret;
 }
diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h
index a870320146..2b04b01a0c 100644
--- a/xen/common/sched/private.h
+++ b/xen/common/sched/private.h
@@ -593,6 +593,13 @@ affinity_balance_cpumask(const struct sched_unit *unit, int step,
         cpumask_copy(mask, unit->cpu_hard_affinity);
 }
 
+struct affinity_masks {
+    cpumask_var_t hard;
+    cpumask_var_t soft;
+};
+
+bool alloc_affinity_masks(struct affinity_masks *affinity);
+void free_affinity_masks(struct affinity_masks *affinity);
 void sched_rm_cpu(unsigned int cpu);
 const cpumask_t *sched_get_opt_cpumask(enum sched_gran opt, unsigned int cpu);
 void schedule_dump(struct cpupool *c);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 9671062360..3f4225738a 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -655,8 +655,15 @@ static inline void get_knownalive_domain(struct domain *d)
     ASSERT(!(atomic_read(&d->refcnt) & DOMAIN_DESTROYED));
 }
 
+struct affinity_masks;
+
 int domain_set_node_affinity(struct domain *d, const nodemask_t *affinity);
-void domain_update_node_affinity(struct domain *d);
+void domain_update_node_aff(struct domain *d, struct affinity_masks *affinity);
+
+static inline void domain_update_node_affinity(struct domain *d)
+{
+    domain_update_node_aff(d, NULL);
+}
 
 /*
  * To be implemented by each architecture, sanity checking the configuration
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.16


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:14:17 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:14:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420233.664884 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF5N-0007x9-Ql; Tue, 11 Oct 2022 13:14:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420233.664884; Tue, 11 Oct 2022 13:14:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF5N-0007x2-OA; Tue, 11 Oct 2022 13:14:17 +0000
Received: by outflank-mailman (input) for mailman id 420233;
 Tue, 11 Oct 2022 13:14:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF5M-0007ws-JT
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:14:16 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF5M-0002HO-Ii
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:14:16 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF5M-0001N5-Hv
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:14:16 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=FyvhI3wTzBI3dv2bb47AlsPOKCAbXS4ZKBAUYXhLJM0=; b=q/+kR1uS4PC9ChfeNO5xR4hyMY
	nWVjGysT/6WO70Gx1DEOXsw7wJNQ2BbZXYf/YLd76C8VYSnQdN89ava8nHiZGURr7wSL+C+X7YT8n
	aiE9QvqsLsJoAhGbfZ46A/ZKtp+6g+tPoHd6RgtWRt/vhdWtFyAIO4mUdiw4fw0zkUbY=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.16] xen/sched: carve out memory allocation and freeing from schedule_cpu_rm()
Message-Id: <E1oiF5M-0001N5-Hv@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:14:16 +0000

commit c377ceab0a007690a1e71c81a5232613c99e944d
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Tue Oct 11 15:00:05 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:00:05 2022 +0200

    xen/sched: carve out memory allocation and freeing from schedule_cpu_rm()
    
    In order to prepare not allocating or freeing memory from
    schedule_cpu_rm(), move this functionality to dedicated functions.
    
    For now call those functions from schedule_cpu_rm().
    
    No change of behavior expected.
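
    The shape being prepared here is: allocate everything up front, run the
    critical section against only pre-allocated data (so it can later run
    in stop-machine context, where allocation is forbidden), then free
    afterwards. A standalone sketch under that assumption (all names here
    are hypothetical stand-ins for alloc_cpu_rm_data()/free_cpu_rm_data()):

    ```c
    #include <assert.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical per-removal scratch data, allocated ahead of time. */
    struct rm_data {
        int *scratch;
    };

    static struct rm_data *alloc_rm_data(void)
    {
        struct rm_data *data = malloc(sizeof(*data));

        if ( !data )
            return NULL;
        data->scratch = malloc(sizeof(int));
        if ( !data->scratch )
        {
            free(data);
            return NULL;
        }
        return data;
    }

    static void free_rm_data(struct rm_data *data)
    {
        free(data->scratch);
        free(data);
    }

    /* The "no allocation allowed" part: touches only pre-allocated memory. */
    static int do_removal(struct rm_data *data)
    {
        *data->scratch = 42;
        return 0;
    }

    int main(void)
    {
        struct rm_data *data = alloc_rm_data();

        assert(data);            /* failure is reported before, not during */
        assert(do_removal(data) == 0);
        assert(*data->scratch == 42);
        free_rm_data(data);
        printf("ok\n");
        return 0;
    }
    ```

    A later change can then split alloc_rm_data()/free_rm_data() out of the
    caller entirely and pass the pre-allocated structure in.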
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: d42be6f83480b3ada286dc18444331a816be88a3
    master date: 2022-09-05 11:42:30 +0100
---
 xen/common/sched/core.c    | 143 +++++++++++++++++++++++++++------------------
 xen/common/sched/private.h |  11 ++++
 2 files changed, 98 insertions(+), 56 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 065a83eca9..2decb1161a 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -3221,6 +3221,75 @@ out:
     return ret;
 }
 
+/*
+ * Allocate all memory needed for free_cpu_rm_data(), as allocations cannot
+ * be made in stop_machine() context.
+ *
+ * Between alloc_cpu_rm_data() and the real cpu removal action the relevant
+ * contents of struct sched_resource can't change, as the cpu in question is
+ * locked against any other movement to or from cpupools, and the data copied
+ * by alloc_cpu_rm_data() is modified only in case the cpu in question is
+ * being moved from or to a cpupool.
+ */
+struct cpu_rm_data *alloc_cpu_rm_data(unsigned int cpu)
+{
+    struct cpu_rm_data *data;
+    const struct sched_resource *sr;
+    unsigned int idx;
+
+    rcu_read_lock(&sched_res_rculock);
+
+    sr = get_sched_res(cpu);
+    data = xmalloc_flex_struct(struct cpu_rm_data, sr, sr->granularity - 1);
+    if ( !data )
+        goto out;
+
+    data->old_ops = sr->scheduler;
+    data->vpriv_old = idle_vcpu[cpu]->sched_unit->priv;
+    data->ppriv_old = sr->sched_priv;
+
+    for ( idx = 0; idx < sr->granularity - 1; idx++ )
+    {
+        data->sr[idx] = sched_alloc_res();
+        if ( data->sr[idx] )
+        {
+            data->sr[idx]->sched_unit_idle = sched_alloc_unit_mem();
+            if ( !data->sr[idx]->sched_unit_idle )
+            {
+                sched_res_free(&data->sr[idx]->rcu);
+                data->sr[idx] = NULL;
+            }
+        }
+        if ( !data->sr[idx] )
+        {
+            while ( idx > 0 )
+                sched_res_free(&data->sr[--idx]->rcu);
+            XFREE(data);
+            goto out;
+        }
+
+        data->sr[idx]->curr = data->sr[idx]->sched_unit_idle;
+        data->sr[idx]->scheduler = &sched_idle_ops;
+        data->sr[idx]->granularity = 1;
+
+        /* We want the lock not to change when replacing the resource. */
+        data->sr[idx]->schedule_lock = sr->schedule_lock;
+    }
+
+ out:
+    rcu_read_unlock(&sched_res_rculock);
+
+    return data;
+}
+
+void free_cpu_rm_data(struct cpu_rm_data *mem, unsigned int cpu)
+{
+    sched_free_udata(mem->old_ops, mem->vpriv_old);
+    sched_free_pdata(mem->old_ops, mem->ppriv_old, cpu);
+
+    xfree(mem);
+}
+
 /*
  * Remove a pCPU from its cpupool. Its scheduler becomes &sched_idle_ops
  * (the idle scheduler).
@@ -3229,53 +3298,23 @@ out:
  */
 int schedule_cpu_rm(unsigned int cpu)
 {
-    void *ppriv_old, *vpriv_old;
-    struct sched_resource *sr, **sr_new = NULL;
+    struct sched_resource *sr;
+    struct cpu_rm_data *data;
     struct sched_unit *unit;
-    struct scheduler *old_ops;
     spinlock_t *old_lock;
     unsigned long flags;
-    int idx, ret = -ENOMEM;
+    int idx = 0;
     unsigned int cpu_iter;
 
+    data = alloc_cpu_rm_data(cpu);
+    if ( !data )
+        return -ENOMEM;
+
     rcu_read_lock(&sched_res_rculock);
 
     sr = get_sched_res(cpu);
-    old_ops = sr->scheduler;
-
-    if ( sr->granularity > 1 )
-    {
-        sr_new = xmalloc_array(struct sched_resource *, sr->granularity - 1);
-        if ( !sr_new )
-            goto out;
-        for ( idx = 0; idx < sr->granularity - 1; idx++ )
-        {
-            sr_new[idx] = sched_alloc_res();
-            if ( sr_new[idx] )
-            {
-                sr_new[idx]->sched_unit_idle = sched_alloc_unit_mem();
-                if ( !sr_new[idx]->sched_unit_idle )
-                {
-                    sched_res_free(&sr_new[idx]->rcu);
-                    sr_new[idx] = NULL;
-                }
-            }
-            if ( !sr_new[idx] )
-            {
-                for ( idx--; idx >= 0; idx-- )
-                    sched_res_free(&sr_new[idx]->rcu);
-                goto out;
-            }
-            sr_new[idx]->curr = sr_new[idx]->sched_unit_idle;
-            sr_new[idx]->scheduler = &sched_idle_ops;
-            sr_new[idx]->granularity = 1;
 
-            /* We want the lock not to change when replacing the resource. */
-            sr_new[idx]->schedule_lock = sr->schedule_lock;
-        }
-    }
-
-    ret = 0;
+    ASSERT(sr->granularity);
     ASSERT(sr->cpupool != NULL);
     ASSERT(cpumask_test_cpu(cpu, &cpupool_free_cpus));
     ASSERT(!cpumask_test_cpu(cpu, sr->cpupool->cpu_valid));
@@ -3283,10 +3322,6 @@ int schedule_cpu_rm(unsigned int cpu)
     /* See comment in schedule_cpu_add() regarding lock switching. */
     old_lock = pcpu_schedule_lock_irqsave(cpu, &flags);
 
-    vpriv_old = idle_vcpu[cpu]->sched_unit->priv;
-    ppriv_old = sr->sched_priv;
-
-    idx = 0;
     for_each_cpu ( cpu_iter, sr->cpus )
     {
         per_cpu(sched_res_idx, cpu_iter) = 0;
@@ -3300,27 +3335,27 @@ int schedule_cpu_rm(unsigned int cpu)
         else
         {
             /* Initialize unit. */
-            unit = sr_new[idx]->sched_unit_idle;
-            unit->res = sr_new[idx];
+            unit = data->sr[idx]->sched_unit_idle;
+            unit->res = data->sr[idx];
             unit->is_running = true;
             sched_unit_add_vcpu(unit, idle_vcpu[cpu_iter]);
             sched_domain_insert_unit(unit, idle_vcpu[cpu_iter]->domain);
 
             /* Adjust cpu masks of resources (old and new). */
             cpumask_clear_cpu(cpu_iter, sr->cpus);
-            cpumask_set_cpu(cpu_iter, sr_new[idx]->cpus);
+            cpumask_set_cpu(cpu_iter, data->sr[idx]->cpus);
             cpumask_set_cpu(cpu_iter, &sched_res_mask);
 
             /* Init timer. */
-            init_timer(&sr_new[idx]->s_timer, s_timer_fn, NULL, cpu_iter);
+            init_timer(&data->sr[idx]->s_timer, s_timer_fn, NULL, cpu_iter);
 
             /* Last resource initializations and insert resource pointer. */
-            sr_new[idx]->master_cpu = cpu_iter;
-            set_sched_res(cpu_iter, sr_new[idx]);
+            data->sr[idx]->master_cpu = cpu_iter;
+            set_sched_res(cpu_iter, data->sr[idx]);
 
             /* Last action: set the new lock pointer. */
             smp_mb();
-            sr_new[idx]->schedule_lock = &sched_free_cpu_lock;
+            data->sr[idx]->schedule_lock = &sched_free_cpu_lock;
 
             idx++;
         }
@@ -3336,16 +3371,12 @@ int schedule_cpu_rm(unsigned int cpu)
     /* _Not_ pcpu_schedule_unlock(): schedule_lock may have changed! */
     spin_unlock_irqrestore(old_lock, flags);
 
-    sched_deinit_pdata(old_ops, ppriv_old, cpu);
-
-    sched_free_udata(old_ops, vpriv_old);
-    sched_free_pdata(old_ops, ppriv_old, cpu);
+    sched_deinit_pdata(data->old_ops, data->ppriv_old, cpu);
 
-out:
     rcu_read_unlock(&sched_res_rculock);
-    xfree(sr_new);
+    free_cpu_rm_data(data, cpu);
 
-    return ret;
+    return 0;
 }
 
 struct scheduler *scheduler_get_default(void)
diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h
index 2b04b01a0c..e286849a13 100644
--- a/xen/common/sched/private.h
+++ b/xen/common/sched/private.h
@@ -600,6 +600,15 @@ struct affinity_masks {
 
 bool alloc_affinity_masks(struct affinity_masks *affinity);
 void free_affinity_masks(struct affinity_masks *affinity);
+
+/* Memory allocation related data for schedule_cpu_rm(). */
+struct cpu_rm_data {
+    const struct scheduler *old_ops;
+    void *ppriv_old;
+    void *vpriv_old;
+    struct sched_resource *sr[];
+};
+
 void sched_rm_cpu(unsigned int cpu);
 const cpumask_t *sched_get_opt_cpumask(enum sched_gran opt, unsigned int cpu);
 void schedule_dump(struct cpupool *c);
@@ -608,6 +617,8 @@ struct scheduler *scheduler_alloc(unsigned int sched_id);
 void scheduler_free(struct scheduler *sched);
 int cpu_disable_scheduler(unsigned int cpu);
 int schedule_cpu_add(unsigned int cpu, struct cpupool *c);
+struct cpu_rm_data *alloc_cpu_rm_data(unsigned int cpu);
+void free_cpu_rm_data(struct cpu_rm_data *mem, unsigned int cpu);
 int schedule_cpu_rm(unsigned int cpu);
 int sched_move_domain(struct domain *d, struct cpupool *c);
 struct cpupool *cpupool_get_by_id(unsigned int poolid);
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.16


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:14:27 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:14:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420234.664889 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF5X-00080I-Tf; Tue, 11 Oct 2022 13:14:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420234.664889; Tue, 11 Oct 2022 13:14:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF5X-00080A-Q7; Tue, 11 Oct 2022 13:14:27 +0000
Received: by outflank-mailman (input) for mailman id 420234;
 Tue, 11 Oct 2022 13:14:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF5W-000800-MT
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:14:26 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF5W-0002HT-Lk
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:14:26 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF5W-0001OV-Kw
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:14:26 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=vDPnGYnbEBKWNzftJtPKp/eFIFBSOiHAjokIecEb8rc=; b=VVJDy654KZNTYNjFT3pKl2a7et
	0TQ+Ewf8Zf7+JGv+tUCJphr3dqR2riqzDFCCdyN+9veVna3G4Qc93aPKmoaaSoS1+Wuv4LS29zV4E
	woh8AfxsPMBxwaI0GmjQlImYI2izkJNL5czYadSrknzS8V/AKtlt9O77VAQrV4IvQmXk=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.16] xen/sched: fix cpu hotplug
Message-Id: <E1oiF5W-0001OV-Kw@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:14:26 +0000

commit 4f3204c2bc66db18c61600dd3e08bf1fd9584a1b
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Tue Oct 11 15:00:19 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:00:19 2022 +0200

    xen/sched: fix cpu hotplug
    
    Cpu unplugging is calling schedule_cpu_rm() via stop_machine_run() with
    interrupts disabled, thus any memory allocation or freeing must be
    avoided.
    
    Since commit 5047cd1d5dea ("xen/common: Use enhanced
    ASSERT_ALLOC_CONTEXT in xmalloc()") this restriction is being enforced
    via an assertion, which will now fail.
    
    Fix this by allocating needed memory before entering stop_machine_run()
    and freeing any memory only after having finished stop_machine_run().
    
    Fixes: 1ec410112cdd ("xen/sched: support differing granularity in schedule_cpu_[add/rm]()")
    Reported-by: Gao Ruifeng <ruifeng.gao@intel.com>
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Tested-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: d84473689611eed32fd90b27e614f28af767fa3f
    master date: 2022-09-05 11:42:30 +0100
---
 xen/common/sched/core.c    | 25 +++++++++++++----
 xen/common/sched/cpupool.c | 69 ++++++++++++++++++++++++++++++++++++----------
 xen/common/sched/private.h |  5 ++--
 3 files changed, 77 insertions(+), 22 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 2decb1161a..900aab8f66 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -3231,7 +3231,7 @@ out:
  * by alloc_cpu_rm_data() is modified only in case the cpu in question is
  * being moved from or to a cpupool.
  */
-struct cpu_rm_data *alloc_cpu_rm_data(unsigned int cpu)
+struct cpu_rm_data *alloc_cpu_rm_data(unsigned int cpu, bool aff_alloc)
 {
     struct cpu_rm_data *data;
     const struct sched_resource *sr;
@@ -3244,6 +3244,17 @@ struct cpu_rm_data *alloc_cpu_rm_data(unsigned int cpu)
     if ( !data )
         goto out;
 
+    if ( aff_alloc )
+    {
+        if ( !alloc_affinity_masks(&data->affinity) )
+        {
+            XFREE(data);
+            goto out;
+        }
+    }
+    else
+        memset(&data->affinity, 0, sizeof(data->affinity));
+
     data->old_ops = sr->scheduler;
     data->vpriv_old = idle_vcpu[cpu]->sched_unit->priv;
     data->ppriv_old = sr->sched_priv;
@@ -3264,6 +3275,7 @@ struct cpu_rm_data *alloc_cpu_rm_data(unsigned int cpu)
         {
             while ( idx > 0 )
                 sched_res_free(&data->sr[--idx]->rcu);
+            free_affinity_masks(&data->affinity);
             XFREE(data);
             goto out;
         }
@@ -3286,6 +3298,7 @@ void free_cpu_rm_data(struct cpu_rm_data *mem, unsigned int cpu)
 {
     sched_free_udata(mem->old_ops, mem->vpriv_old);
     sched_free_pdata(mem->old_ops, mem->ppriv_old, cpu);
+    free_affinity_masks(&mem->affinity);
 
     xfree(mem);
 }
@@ -3296,17 +3309,18 @@ void free_cpu_rm_data(struct cpu_rm_data *mem, unsigned int cpu)
  * The cpu is already marked as "free" and not valid any longer for its
  * cpupool.
  */
-int schedule_cpu_rm(unsigned int cpu)
+int schedule_cpu_rm(unsigned int cpu, struct cpu_rm_data *data)
 {
     struct sched_resource *sr;
-    struct cpu_rm_data *data;
     struct sched_unit *unit;
     spinlock_t *old_lock;
     unsigned long flags;
     int idx = 0;
     unsigned int cpu_iter;
+    bool free_data = !data;
 
-    data = alloc_cpu_rm_data(cpu);
+    if ( !data )
+        data = alloc_cpu_rm_data(cpu, false);
     if ( !data )
         return -ENOMEM;
 
@@ -3374,7 +3388,8 @@ int schedule_cpu_rm(unsigned int cpu)
     sched_deinit_pdata(data->old_ops, data->ppriv_old, cpu);
 
     rcu_read_unlock(&sched_res_rculock);
-    free_cpu_rm_data(data, cpu);
+    if ( free_data )
+        free_cpu_rm_data(data, cpu);
 
     return 0;
 }
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 45b6ff9956..b5a948639a 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -402,22 +402,28 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c)
 }
 
 /* Update affinities of all domains in a cpupool. */
-static void cpupool_update_node_affinity(const struct cpupool *c)
+static void cpupool_update_node_affinity(const struct cpupool *c,
+                                         struct affinity_masks *masks)
 {
-    struct affinity_masks masks;
+    struct affinity_masks local_masks;
     struct domain *d;
 
-    if ( !alloc_affinity_masks(&masks) )
-        return;
+    if ( !masks )
+    {
+        if ( !alloc_affinity_masks(&local_masks) )
+            return;
+        masks = &local_masks;
+    }
 
     rcu_read_lock(&domlist_read_lock);
 
     for_each_domain_in_cpupool(d, c)
-        domain_update_node_aff(d, &masks);
+        domain_update_node_aff(d, masks);
 
     rcu_read_unlock(&domlist_read_lock);
 
-    free_affinity_masks(&masks);
+    if ( masks == &local_masks )
+        free_affinity_masks(masks);
 }
 
 /*
@@ -451,15 +457,17 @@ static int cpupool_assign_cpu_locked(struct cpupool *c, unsigned int cpu)
 
     rcu_read_unlock(&sched_res_rculock);
 
-    cpupool_update_node_affinity(c);
+    cpupool_update_node_affinity(c, NULL);
 
     return 0;
 }
 
-static int cpupool_unassign_cpu_finish(struct cpupool *c)
+static int cpupool_unassign_cpu_finish(struct cpupool *c,
+                                       struct cpu_rm_data *mem)
 {
     int cpu = cpupool_moving_cpu;
     const cpumask_t *cpus;
+    struct affinity_masks *masks = mem ? &mem->affinity : NULL;
     int ret;
 
     if ( c != cpupool_cpu_moving )
@@ -482,7 +490,7 @@ static int cpupool_unassign_cpu_finish(struct cpupool *c)
      */
     if ( !ret )
     {
-        ret = schedule_cpu_rm(cpu);
+        ret = schedule_cpu_rm(cpu, mem);
         if ( ret )
             cpumask_andnot(&cpupool_free_cpus, &cpupool_free_cpus, cpus);
         else
@@ -494,7 +502,7 @@ static int cpupool_unassign_cpu_finish(struct cpupool *c)
     }
     rcu_read_unlock(&sched_res_rculock);
 
-    cpupool_update_node_affinity(c);
+    cpupool_update_node_affinity(c, masks);
 
     return ret;
 }
@@ -558,7 +566,7 @@ static long cpupool_unassign_cpu_helper(void *info)
                       cpupool_cpu_moving->cpupool_id, cpupool_moving_cpu);
     spin_lock(&cpupool_lock);
 
-    ret = cpupool_unassign_cpu_finish(c);
+    ret = cpupool_unassign_cpu_finish(c, NULL);
 
     spin_unlock(&cpupool_lock);
     debugtrace_printk("cpupool_unassign_cpu ret=%ld\n", ret);
@@ -701,7 +709,7 @@ static int cpupool_cpu_add(unsigned int cpu)
  * This function is called in stop_machine context, so we can be sure no
  * non-idle vcpu is active on the system.
  */
-static void cpupool_cpu_remove(unsigned int cpu)
+static void cpupool_cpu_remove(unsigned int cpu, struct cpu_rm_data *mem)
 {
     int ret;
 
@@ -709,7 +717,7 @@ static void cpupool_cpu_remove(unsigned int cpu)
 
     if ( !cpumask_test_cpu(cpu, &cpupool_free_cpus) )
     {
-        ret = cpupool_unassign_cpu_finish(cpupool0);
+        ret = cpupool_unassign_cpu_finish(cpupool0, mem);
         BUG_ON(ret);
     }
     cpumask_clear_cpu(cpu, &cpupool_free_cpus);
@@ -775,7 +783,7 @@ static void cpupool_cpu_remove_forced(unsigned int cpu)
         {
             ret = cpupool_unassign_cpu_start(c, master_cpu);
             BUG_ON(ret);
-            ret = cpupool_unassign_cpu_finish(c);
+            ret = cpupool_unassign_cpu_finish(c, NULL);
             BUG_ON(ret);
         }
     }
@@ -993,12 +1001,24 @@ void dump_runq(unsigned char key)
 static int cpu_callback(
     struct notifier_block *nfb, unsigned long action, void *hcpu)
 {
+    static struct cpu_rm_data *mem;
+
     unsigned int cpu = (unsigned long)hcpu;
     int rc = 0;
 
     switch ( action )
     {
     case CPU_DOWN_FAILED:
+        if ( system_state <= SYS_STATE_active )
+        {
+            if ( mem )
+            {
+                free_cpu_rm_data(mem, cpu);
+                mem = NULL;
+            }
+            rc = cpupool_cpu_add(cpu);
+        }
+        break;
     case CPU_ONLINE:
         if ( system_state <= SYS_STATE_active )
             rc = cpupool_cpu_add(cpu);
@@ -1006,12 +1026,31 @@ static int cpu_callback(
     case CPU_DOWN_PREPARE:
         /* Suspend/Resume don't change assignments of cpus to cpupools. */
         if ( system_state <= SYS_STATE_active )
+        {
             rc = cpupool_cpu_remove_prologue(cpu);
+            if ( !rc )
+            {
+                ASSERT(!mem);
+                mem = alloc_cpu_rm_data(cpu, true);
+                rc = mem ? 0 : -ENOMEM;
+            }
+        }
         break;
     case CPU_DYING:
         /* Suspend/Resume don't change assignments of cpus to cpupools. */
         if ( system_state <= SYS_STATE_active )
-            cpupool_cpu_remove(cpu);
+        {
+            ASSERT(mem);
+            cpupool_cpu_remove(cpu, mem);
+        }
+        break;
+    case CPU_DEAD:
+        if ( system_state <= SYS_STATE_active )
+        {
+            ASSERT(mem);
+            free_cpu_rm_data(mem, cpu);
+            mem = NULL;
+        }
         break;
     case CPU_RESUME_FAILED:
         cpupool_cpu_remove_forced(cpu);
diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h
index e286849a13..0126a4bb9e 100644
--- a/xen/common/sched/private.h
+++ b/xen/common/sched/private.h
@@ -603,6 +603,7 @@ void free_affinity_masks(struct affinity_masks *affinity);
 
 /* Memory allocation related data for schedule_cpu_rm(). */
 struct cpu_rm_data {
+    struct affinity_masks affinity;
     const struct scheduler *old_ops;
     void *ppriv_old;
     void *vpriv_old;
@@ -617,9 +618,9 @@ struct scheduler *scheduler_alloc(unsigned int sched_id);
 void scheduler_free(struct scheduler *sched);
 int cpu_disable_scheduler(unsigned int cpu);
 int schedule_cpu_add(unsigned int cpu, struct cpupool *c);
-struct cpu_rm_data *alloc_cpu_rm_data(unsigned int cpu);
+struct cpu_rm_data *alloc_cpu_rm_data(unsigned int cpu, bool aff_alloc);
 void free_cpu_rm_data(struct cpu_rm_data *mem, unsigned int cpu);
-int schedule_cpu_rm(unsigned int cpu);
+int schedule_cpu_rm(unsigned int cpu, struct cpu_rm_data *mem);
 int sched_move_domain(struct domain *d, struct cpupool *c);
 struct cpupool *cpupool_get_by_id(unsigned int poolid);
 void cpupool_put(struct cpupool *pool);
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.16


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:14:38 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:14:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420235.664891 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF5h-00083N-Vd; Tue, 11 Oct 2022 13:14:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420235.664891; Tue, 11 Oct 2022 13:14:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF5h-00083G-T1; Tue, 11 Oct 2022 13:14:37 +0000
Received: by outflank-mailman (input) for mailman id 420235;
 Tue, 11 Oct 2022 13:14:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF5g-000834-PG
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:14:36 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF5g-0002He-OY
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:14:36 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF5g-0001PL-Nm
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:14:36 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=PIMEWUFKtfBalAII5/WzXkfk2UnwGgKaGMZ94qFrH1M=; b=XhEe3X1NO9E3/o8noFIPQavWiL
	4j+uajhBjzjnWuGytfNVmTnkfV2L2Q8gbKSswecE2q6Cs7cBI9yN84XPpfhM5dZQ1on0MGLq1TYH1
	8JMAJL2pi/OM01d5/ZPxUdLlOgV5DRQ1Lf1IAF66g/wUh7WCeGFsI4OJjHxUd3jva/X8=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.16] Config.mk: correct PIE-related option(s) in EMBEDDED_EXTRA_CFLAGS
Message-Id: <E1oiF5g-0001PL-Nm@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:14:36 +0000

commit 2b694dd2932be78431b14257f23b738f2fc8f6a1
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 15:00:33 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:00:33 2022 +0200

    Config.mk: correct PIE-related option(s) in EMBEDDED_EXTRA_CFLAGS
    
    I haven't been able to find evidence of "-nopie" ever having been a
    supported compiler option. The correct spelling is "-no-pie".
    Furthermore like "-pie" this is an option which is solely passed to the
    linker. The compiler only recognizes "-fpie" / "-fPIE" / "-fno-pie", and
    it doesn't infer these options from "-pie" / "-no-pie".
    
    Add the compiler recognized form, but for the possible case of the
    variable also being used somewhere for linking keep the linker option as
    well (with corrected spelling).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
    
    Build: Drop -no-pie from EMBEDDED_EXTRA_CFLAGS
    
    This breaks all Clang builds, as demonstrated by Gitlab CI.
    
    Contrary to the description in ecd6b9759919, -no-pie is not even an option
    passed to the linker.  GCC's actual behaviour is to inhibit the passing of
    -pie to the linker, as well as selecting different cr0 artefacts to be linked.
    
    EMBEDDED_EXTRA_CFLAGS is not used for $(CC)-doing-linking, and not liable to
    gain such a usecase.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Tested-by: Stefano Stabellini <sstabellini@kernel.org>
    Fixes: ecd6b9759919 ("Config.mk: correct PIE-related option(s) in EMBEDDED_EXTRA_CFLAGS")
    master commit: ecd6b9759919fa6335b0be1b5fc5cce29a30c4f1
    master date: 2022-09-08 09:25:26 +0200
    master commit: 13a7c0074ac8fb31f6c0485429b7a20a1946cb22
    master date: 2022-09-27 15:40:42 -0700
---
 Config.mk | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Config.mk b/Config.mk
index 46de3cd1e0..6f95067b8d 100644
--- a/Config.mk
+++ b/Config.mk
@@ -197,7 +197,7 @@ endif
 APPEND_LDFLAGS += $(foreach i, $(APPEND_LIB), -L$(i))
 APPEND_CFLAGS += $(foreach i, $(APPEND_INCLUDES), -I$(i))
 
-EMBEDDED_EXTRA_CFLAGS := -nopie -fno-stack-protector -fno-stack-protector-all
+EMBEDDED_EXTRA_CFLAGS := -fno-pie -fno-stack-protector -fno-stack-protector-all
 EMBEDDED_EXTRA_CFLAGS += -fno-exceptions -fno-asynchronous-unwind-tables
 
 XEN_EXTFILES_URL ?= http://xenbits.xen.org/xen-extfiles
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.16


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:14:48 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:14:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420236.664896 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF5s-00086H-18; Tue, 11 Oct 2022 13:14:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420236.664896; Tue, 11 Oct 2022 13:14:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF5r-000869-Uk; Tue, 11 Oct 2022 13:14:47 +0000
Received: by outflank-mailman (input) for mailman id 420236;
 Tue, 11 Oct 2022 13:14:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF5q-00085x-S2
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:14:46 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF5q-0002Ho-RP
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:14:46 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF5q-0001QE-Qd
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:14:46 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=/62EFVB9cfgyVpS8xdRR9ZOfUIfemwQfPt9ShxTTruk=; b=C4Dvcl8VamubcYklyqlZvca+l4
	lPutQGirVJuhAjdo9DCLRb8ZKn9wF+GOJWYlcWgrbb4x4HVZx/U8NZrlfhWSRh5kj7E6okDvz0e7s
	NeMaJHvIdwLvuopxlOWbV0aADvsfQ4+F5T8pGim8GqH/Ghg3HAASoyr9tBZeBx8RMkyw=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.16] tools/xenstore: minor fix of the migration stream doc
Message-Id: <E1oiF5q-0001QE-Qd@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:14:46 +0000

commit 49510071ee93905378e54664778760ed3908d447
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Tue Oct 11 15:00:59 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:00:59 2022 +0200

    tools/xenstore: minor fix of the migration stream doc
    
    Drop mentioning the non-existent read-only socket in the migration
    stream description document.
    
    The related record field was removed in commit 8868a0e3f674 ("docs:
    update the xenstore migration stream documentation").

    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
    master commit: ace1d2eff80d3d66c37ae765dae3e3cb5697e5a4
    master date: 2022-09-08 09:25:58 +0200
---
 docs/designs/xenstore-migration.md | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/docs/designs/xenstore-migration.md b/docs/designs/xenstore-migration.md
index 5f1155273e..78530bbb0e 100644
--- a/docs/designs/xenstore-migration.md
+++ b/docs/designs/xenstore-migration.md
@@ -129,11 +129,9 @@ xenstored state that needs to be restored.
 | `evtchn-fd`    | The file descriptor used to communicate with |
 |                | the event channel driver                     |
 
-xenstored will resume in the original process context. Hence `rw-socket-fd` and
-`ro-socket-fd` simply specify the file descriptors of the sockets. Sockets
-are not always used, however, and so -1 will be used to denote an unused
-socket.
-
+xenstored will resume in the original process context. Hence `rw-socket-fd`
+simply specifies the file descriptor of the socket. Sockets are not always
+used, however, and so -1 will be used to denote an unused socket.
 
 \pagebreak
 
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.16


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:14:59 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:14:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420237.664900 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF63-00089A-2l; Tue, 11 Oct 2022 13:14:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420237.664900; Tue, 11 Oct 2022 13:14:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiF63-000892-03; Tue, 11 Oct 2022 13:14:59 +0000
Received: by outflank-mailman (input) for mailman id 420237;
 Tue, 11 Oct 2022 13:14:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF60-00088t-V4
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:14:56 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF60-0002Hs-UQ
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:14:56 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiF60-0001Qn-Tj
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:14:56 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=B9CyPauoi7wJD3gBlg/EA/Cogx4NziJyaH0esniRY9s=; b=cAbsqPKa9I7NHbsfaZZPMFnwdf
	dx6Ij1+AvhRlXDOTg911Rvy8/vzRLSd2e9/KXkfAMSYW6caT4X9OabiPm35BTzjQVXDvEqDHAQjno
	+pcTd88TIOBhhWrVlaaXl8pIA1LjOKdmdw9F7pstJ8rAXyPxwAmsOog9BtSM3Igu4XOQ=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.16] xen/gnttab: fix gnttab_acquire_resource()
Message-Id: <E1oiF60-0001Qn-Tj@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:14:56 +0000

commit b9560762392c01b3ee84148c07be8017cb42dbc9
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Tue Oct 11 15:01:22 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:01:22 2022 +0200

    xen/gnttab: fix gnttab_acquire_resource()
    
    Commit 9dc46386d89d ("gnttab: work around "may be used uninitialized"
    warning") was wrong, as vaddrs can legitimately be NULL in case
    XENMEM_resource_grant_table_id_status was specified for a grant table
    v1. This would result in crashes in debug builds due to
    ASSERT_UNREACHABLE() triggering.
    
    Check vaddrs only to be NULL in the rc == 0 case.
    
    Expand the tests in tools/tests/resource to tickle this path, and verify that
    using XENMEM_resource_grant_table_id_status on a v1 grant table fails.
    
    Fixes: 9dc46386d89d ("gnttab: work around "may be used uninitialized" warning")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com> # xen
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 52daa6a8483e4fbd6757c9d1b791e23931791608
    master date: 2022-09-09 16:28:38 +0100
---
 tools/tests/resource/test-resource.c | 15 +++++++++++++++
 xen/common/grant_table.c             |  2 +-
 2 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/tools/tests/resource/test-resource.c b/tools/tests/resource/test-resource.c
index 0557f8a1b5..37dfff4dcd 100644
--- a/tools/tests/resource/test-resource.c
+++ b/tools/tests/resource/test-resource.c
@@ -106,6 +106,21 @@ static void test_gnttab(uint32_t domid, unsigned int nr_frames,
     if ( rc )
         return fail("    Fail: Unmap grant table %d - %s\n",
                     errno, strerror(errno));
+
+    /*
+     * Verify that an attempt to map the status frames fails, as the domain is
+     * in gnttab v1 mode.
+     */
+    res = xenforeignmemory_map_resource(
+        fh, domid, XENMEM_resource_grant_table,
+        XENMEM_resource_grant_table_id_status, 0, 1,
+        (void **)&gnttab, PROT_READ | PROT_WRITE, 0);
+
+    if ( res )
+    {
+        fail("    Fail: Managed to map gnttab v2 status frames in v1 mode\n");
+        xenforeignmemory_unmap_resource(fh, res);
+    }
 }
 
 static void test_domain_configurations(void)
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index d8ca645b96..76272b3c8a 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -4142,7 +4142,7 @@ int gnttab_acquire_resource(
      * on non-error paths, and hence it needs setting to NULL at the top of the
      * function.  Leave some runtime safety.
      */
-    if ( !vaddrs )
+    if ( !rc && !vaddrs )
     {
         ASSERT_UNREACHABLE();
         rc = -ENODATA;
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.16


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:15:09 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.16] x86: wire up VCPUOP_register_vcpu_time_memory_area for 32-bit guests
Message-Id: <E1oiF6B-0001Rh-08@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:15:07 +0000

commit 3f4da85ca8816f6617529c80850eaddd80ea0f1f
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 15:01:36 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:01:36 2022 +0200

    x86: wire up VCPUOP_register_vcpu_time_memory_area for 32-bit guests
    
    Ever since its introduction, VCPUOP_register_vcpu_time_memory_area has
    been available only to native domains. Linux, for example, would attempt
    to use it irrespective of guest bitness (including in its so-called
    PVHVM mode) as long as it finds XEN_PVCLOCK_TSC_STABLE_BIT set (which we
    set only for clocksource=tsc, which in turn needs enabling via a command
    line option).
    
    Fixes: a5d39947cb89 ("Allow guests to register secondary vcpu_time_info")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    master commit: b726541d94bd0a80b5864d17a2cd2e6d73a3fe0a
    master date: 2022-09-29 14:47:45 +0200
---
 xen/arch/x86/x86_64/domain.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/xen/arch/x86/x86_64/domain.c b/xen/arch/x86/x86_64/domain.c
index c46dccc25a..d51d993447 100644
--- a/xen/arch/x86/x86_64/domain.c
+++ b/xen/arch/x86/x86_64/domain.c
@@ -54,6 +54,26 @@ arch_compat_vcpu_op(
         break;
     }
 
+    case VCPUOP_register_vcpu_time_memory_area:
+    {
+        struct compat_vcpu_register_time_memory_area area = { .addr.p = 0 };
+
+        rc = -EFAULT;
+        if ( copy_from_guest(&area.addr.h, arg, 1) )
+            break;
+
+        if ( area.addr.h.c != area.addr.p ||
+             !compat_handle_okay(area.addr.h, 1) )
+            break;
+
+        rc = 0;
+        guest_from_compat_handle(v->arch.time_info_guest, area.addr.h);
+
+        force_update_vcpu_system_time(v);
+
+        break;
+    }
+
     case VCPUOP_get_physid:
         rc = arch_do_vcpu_op(cmd, v, arg);
         break;
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.16
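[Editorial note: the guard "area.addr.h.c != area.addr.p" above rejects values that would be truncated when viewed through the 32-bit compat handle. A hypothetical sketch of that union trick (types and names are illustrative, and little-endian layout, as on x86, is assumed):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical compat-handle sketch: the full-width view is
 * zero-initialized, the guest-supplied value is written through it, and
 * the 32-bit view must round-trip to the same value, proving the upper
 * 32 bits are clear. Little-endian layout assumed.
 */
typedef union {
    uint64_t p;   /* full-width (host) view, zero-initialized first */
    uint32_t c;   /* compat (32-bit guest) view */
} compat_addr_t;

static int compat_addr_ok(uint64_t guest_val)
{
    compat_addr_t addr = { .p = 0 };

    addr.p = guest_val;
    /* Mirrors "area.addr.h.c != area.addr.p": reject truncating values. */
    return (uint64_t)addr.c == addr.p;
}
```
]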


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:15:18 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.16] x86/vpmu: Fix race-condition in vpmu_load
Message-Id: <E1oiF6L-0001SH-37@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:15:17 +0000

commit 1bce7fb1f702da4f7a749c6f1457ecb20bf74fca
Author:     Tamas K Lengyel <tamas.lengyel@intel.com>
AuthorDate: Tue Oct 11 15:01:48 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:01:48 2022 +0200

    x86/vpmu: Fix race-condition in vpmu_load
    
    The vPMU code attempts to optimize saving/reloading of the PMU context
    by keeping track of which vCPU ran on each pCPU. When a pCPU is getting
    scheduled, it checks whether the previous vCPU is the current one; if
    not, it attempts a call to vpmu_save_force. Unfortunately, if the
    previous vCPU is already being scheduled to run on another pCPU, its
    state will already be runnable, which results in an ASSERT failure.
    
    Fix this by always performing a PMU context save in vpmu_save when
    called from vpmu_switch_from, and a vpmu_load when called from
    vpmu_switch_to.
    
    While this adds minimal overhead in case the same vCPU is rescheduled
    on the same pCPU, the ASSERT failure is avoided and the code is a lot
    easier to reason about.
    
    Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    master commit: defa4e51d20a143bdd4395a075bf0933bb38a9a4
    master date: 2022-09-30 09:53:49 +0200
---
 xen/arch/x86/cpu/vpmu.c | 42 ++++--------------------------------------
 1 file changed, 4 insertions(+), 38 deletions(-)

diff --git a/xen/arch/x86/cpu/vpmu.c b/xen/arch/x86/cpu/vpmu.c
index 16e91a3694..b6c2ec3cd0 100644
--- a/xen/arch/x86/cpu/vpmu.c
+++ b/xen/arch/x86/cpu/vpmu.c
@@ -368,58 +368,24 @@ void vpmu_save(struct vcpu *v)
     vpmu->last_pcpu = pcpu;
     per_cpu(last_vcpu, pcpu) = v;
 
+    vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+
     if ( vpmu->arch_vpmu_ops )
         if ( vpmu->arch_vpmu_ops->arch_vpmu_save(v, 0) )
             vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
 
+    vpmu_reset(vpmu, VPMU_CONTEXT_SAVE);
+
     apic_write(APIC_LVTPC, PMU_APIC_VECTOR | APIC_LVT_MASKED);
 }
 
 int vpmu_load(struct vcpu *v, bool_t from_guest)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    int pcpu = smp_processor_id();
-    struct vcpu *prev = NULL;
 
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
         return 0;
 
-    /* First time this VCPU is running here */
-    if ( vpmu->last_pcpu != pcpu )
-    {
-        /*
-         * Get the context from last pcpu that we ran on. Note that if another
-         * VCPU is running there it must have saved this VPCU's context before
-         * startig to run (see below).
-         * There should be no race since remote pcpu will disable interrupts
-         * before saving the context.
-         */
-        if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-        {
-            on_selected_cpus(cpumask_of(vpmu->last_pcpu),
-                             vpmu_save_force, (void *)v, 1);
-            vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
-        }
-    } 
-
-    /* Prevent forced context save from remote CPU */
-    local_irq_disable();
-
-    prev = per_cpu(last_vcpu, pcpu);
-
-    if ( prev != v && prev )
-    {
-        vpmu = vcpu_vpmu(prev);
-
-        /* Someone ran here before us */
-        vpmu_save_force(prev);
-        vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
-
-        vpmu = vcpu_vpmu(v);
-    }
-
-    local_irq_enable();
-
     /* Only when PMU is counting, we load PMU context immediately. */
     if ( !vpmu_is_set(vpmu, VPMU_RUNNING) ||
          (!has_vlapic(vpmu_vcpu(vpmu)->domain) &&
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.16
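[Editorial note: the surviving part of vpmu_save above brackets the arch save hook with the VPMU_CONTEXT_SAVE flag and drops VPMU_CONTEXT_LOADED if the hook fails. A minimal sketch of that flag discipline (struct and helper names are illustrative, not Xen's):

```c
#include <assert.h>

/* Illustrative flag values; real Xen uses vpmu_set()/vpmu_reset(). */
enum { VPMU_CONTEXT_LOADED = 1 << 0, VPMU_CONTEXT_SAVE = 1 << 1 };

struct vpmu_sk { unsigned int flags; };

/* Stand-in for arch_vpmu_save(); nonzero return means the save failed. */
static int arch_save(struct vpmu_sk *v, int fail) { (void)v; return fail; }

static void vpmu_save_sketch(struct vpmu_sk *v, int arch_fail)
{
    v->flags |= VPMU_CONTEXT_SAVE;        /* mark a save in progress */
    if ( arch_save(v, arch_fail) )
        v->flags &= ~VPMU_CONTEXT_LOADED; /* failed: context not loaded */
    v->flags &= ~VPMU_CONTEXT_SAVE;       /* save window closed */
}
```
]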


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:22:07 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.15] xen/arm: p2m: Prevent adding mapping when domain is dying
Message-Id: <E1oiFCu-0001u9-Pl@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:22:04 +0000

commit 09fc590c15773c2471946a78740c6b02e8c34a45
Author:     Julien Grall <jgrall@amazon.com>
AuthorDate: Tue Oct 11 15:05:53 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:05:53 2022 +0200

    xen/arm: p2m: Prevent adding mapping when domain is dying
    
    During the domain destroy process, the domain will still be accessible
    until it is fully destroyed. The same is true of the P2M, because we
    don't bail out early if is_dying is non-zero. If a domain has permission
    to modify another domain's P2M (i.e. dom0, or a stubdomain), then
    foreign mappings can be added past relinquish_p2m_mapping().
    
    Therefore, we need to prevent mappings from being added while the
    domain is dying. This commit does so by adding a d->is_dying check to
    p2m_set_entry(). It also enhances the check in relinquish_p2m_mapping()
    to make sure that no mappings can be added to the P2M after the P2M
    lock is released.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Tested-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    master commit: 3ebe773293e3b945460a3d6f54f3b91915397bab
    master date: 2022-10-11 14:20:18 +0200
---
 xen/arch/arm/p2m.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 2ddd06801a..8398251c51 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1093,6 +1093,15 @@ int p2m_set_entry(struct p2m_domain *p2m,
 {
     int rc = 0;
 
+    /*
+     * Any reference taken by the P2M mappings (e.g. foreign mapping) will
+     * be dropped in relinquish_p2m_mapping(). As the P2M will still
+     * be accessible after, we need to prevent mapping to be added when the
+     * domain is dying.
+     */
+    if ( unlikely(p2m->domain->is_dying) )
+        return -ENOMEM;
+
     while ( nr )
     {
         unsigned long mask;
@@ -1613,6 +1622,8 @@ int relinquish_p2m_mapping(struct domain *d)
     unsigned int order;
     gfn_t start, end;
 
+    BUG_ON(!d->is_dying);
+    /* No mappings can be added in the P2M after the P2M lock is released. */
     p2m_write_lock(p2m);
 
     start = p2m->lowest_mapped_gfn;
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.15
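[Editorial note: the essence of the guard added to p2m_set_entry() above is a refusal to take new references once the domain is marked dying. A standalone sketch (struct names and the error value are illustrative):

```c
#include <assert.h>

#define ENOMEM_ERR 12  /* illustrative; mirrors the -ENOMEM bail-out */

struct dom_sk { int is_dying; };
struct p2m_sk { struct dom_sk *domain; int entries; };

/*
 * Sketch: references taken by P2M mappings are dropped in
 * relinquish_p2m_mapping(), so no new mapping may be added once the
 * domain is dying, or its reference would never be dropped.
 */
static int p2m_set_entry_sketch(struct p2m_sk *p2m)
{
    if ( p2m->domain->is_dying )
        return -ENOMEM_ERR;
    p2m->entries++;   /* stands in for actually installing the mapping */
    return 0;
}
```
]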


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:22:16 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.15] xen/arm: p2m: Handle preemption when freeing intermediate page tables
Message-Id: <E1oiFD4-0001vI-T9@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:22:14 +0000

commit 0d805f9fba4bc155d15047685024f7d842e925e4
Author:     Julien Grall <jgrall@amazon.com>
AuthorDate: Tue Oct 11 15:06:36 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:06:36 2022 +0200

    xen/arm: p2m: Handle preemption when freeing intermediate page tables
    
    At the moment the P2M page tables are freed, without any preemption,
    when the domain structure is freed. As the P2M is quite large,
    iterating through it may take more time than is reasonable without
    intermediate preemption (to run softirqs and perhaps the scheduler).
    
    Split p2m_teardown() in two parts: one preemptible, called when
    relinquishing the resources, and one non-preemptible, called when
    freeing the domain structure.
    
    As we are now freeing the P2M pages early, we also need to prevent
    further allocation if someone calls p2m_set_entry() past p2m_teardown()
    (I wasn't able to prove this will never happen). This is done by
    checking domain->is_dying, introduced in the previous patch, in
    p2m_set_entry().
    
    Similarly, we want to make sure that no one can access the freed
    pages. Therefore the root is cleared before the pages are freed.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Tested-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    master commit: 3202084566bba0ef0c45caf8c24302f83d92f9c8
    master date: 2022-10-11 14:20:56 +0200
---
 xen/arch/arm/domain.c     | 10 ++++++++--
 xen/arch/arm/p2m.c        | 47 ++++++++++++++++++++++++++++++++++++++++++++---
 xen/include/asm-arm/p2m.h | 13 +++++++++++--
 3 files changed, 63 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 5eaf4c718e..223ec9694d 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -779,10 +779,10 @@ fail:
 void arch_domain_destroy(struct domain *d)
 {
     /* IOMMU page table is shared with P2M, always call
-     * iommu_domain_destroy() before p2m_teardown().
+     * iommu_domain_destroy() before p2m_final_teardown().
      */
     iommu_domain_destroy(d);
-    p2m_teardown(d);
+    p2m_final_teardown(d);
     domain_vgic_free(d);
     domain_vuart_free(d);
     free_xenheap_page(d->shared_info);
@@ -984,6 +984,7 @@ enum {
     PROG_xen,
     PROG_page,
     PROG_mapping,
+    PROG_p2m,
     PROG_done,
 };
 
@@ -1038,6 +1039,11 @@ int domain_relinquish_resources(struct domain *d)
         if ( ret )
             return ret;
 
+    PROGRESS(p2m):
+        ret = p2m_teardown(d);
+        if ( ret )
+            return ret;
+
     PROGRESS(done):
         break;
 
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 8398251c51..4ad3e0606e 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1530,17 +1530,58 @@ static void p2m_free_vmid(struct domain *d)
     spin_unlock(&vmid_alloc_lock);
 }
 
-void p2m_teardown(struct domain *d)
+int p2m_teardown(struct domain *d)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    unsigned long count = 0;
     struct page_info *pg;
+    unsigned int i;
+    int rc = 0;
+
+    p2m_write_lock(p2m);
+
+    /*
+     * We are about to free the intermediate page-tables, so clear the
+     * root to prevent any walk to use them.
+     */
+    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
+        clear_and_clean_page(p2m->root + i);
+
+    /*
+     * The domain will not be scheduled anymore, so in theory we should
+     * not need to flush the TLBs. Do it for safety purpose.
+     *
+     * Note that all the devices have already been de-assigned. So we don't
+     * need to flush the IOMMU TLB here.
+     */
+    p2m_force_tlb_flush_sync(p2m);
+
+    while ( (pg = page_list_remove_head(&p2m->pages)) )
+    {
+        free_domheap_page(pg);
+        count++;
+        /* Arbitrarily preempt every 512 iterations */
+        if ( !(count % 512) && hypercall_preempt_check() )
+        {
+            rc = -ERESTART;
+            break;
+        }
+    }
+
+    p2m_write_unlock(p2m);
+
+    return rc;
+}
+
+void p2m_final_teardown(struct domain *d)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
     /* p2m not actually initialized */
     if ( !p2m->domain )
         return;
 
-    while ( (pg = page_list_remove_head(&p2m->pages)) )
-        free_domheap_page(pg);
+    ASSERT(page_list_empty(&p2m->pages));
 
     if ( p2m->root )
         free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 6a2108398f..3a2d51b35d 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -192,8 +192,17 @@ void setup_virt_paging(void);
 /* Init the datastructures for later use by the p2m code */
 int p2m_init(struct domain *d);
 
-/* Return all the p2m resources to Xen. */
-void p2m_teardown(struct domain *d);
+/*
+ * The P2M resources are freed in two parts:
+ *  - p2m_teardown() will be called when relinquish the resources. It
+ *    will free large resources (e.g. intermediate page-tables) that
+ *    requires preemption.
+ *  - p2m_final_teardown() will be called when domain struct is been
+ *    freed. This *cannot* be preempted and therefore one small
+ *    resources should be freed here.
+ */
+int p2m_teardown(struct domain *d);
+void p2m_final_teardown(struct domain *d);
 
 /*
  * Remove mapping refcount on each mapping page in the p2m
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.15
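[Editorial note: the preemptible half of p2m_teardown() above frees pages in a loop and checks for preemption every 512 iterations, returning -ERESTART so the hypercall continuation can resume. A self-contained sketch of that pattern (names and the error value are illustrative, not Xen's):

```c
#include <assert.h>

#define ERESTART_ERR  85   /* illustrative stand-in for -ERESTART */
#define PREEMPT_BATCH 512

/* Stand-ins for hypercall_preempt_check(). */
static int never_preempt(void)  { return 0; }
static int always_preempt(void) { return 1; }

/*
 * Sketch: free work items one at a time, but poll for preemption every
 * PREEMPT_BATCH iterations; on preemption, stop and let the caller
 * re-invoke us later to continue where we left off.
 */
static int teardown_sketch(unsigned long *remaining,
                           int (*preempt_check)(void))
{
    unsigned long count = 0;

    while ( *remaining )
    {
        (*remaining)--;   /* stands in for free_domheap_page() */
        count++;
        /* Arbitrarily preempt every PREEMPT_BATCH iterations. */
        if ( !(count % PREEMPT_BATCH) && preempt_check() )
            return -ERESTART_ERR;
    }
    return 0;
}
```
]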


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:22:26 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.15] x86/p2m: add option to skip root pagetable removal in p2m_teardown()
Message-Id: <E1oiFDF-0001vj-0T@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:22:25 +0000

commit 0f3eab90f327210d91e8e31a769376f286e8819a
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 15:07:25 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:07:25 2022 +0200

    x86/p2m: add option to skip root pagetable removal in p2m_teardown()
    
    Add a new parameter to p2m_teardown() in order to select whether the
    root page table should also be freed.  Note that all users are
    adjusted to pass the parameter to remove the root page tables, so
    behavior is not modified.
    
    No functional change intended.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Suggested-by: Julien Grall <julien@xen.org>
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: 1df52a270225527ae27bfa2fc40347bf93b78357
    master date: 2022-10-11 14:21:23 +0200
---
 xen/arch/x86/mm/hap/hap.c       |  6 +++---
 xen/arch/x86/mm/p2m.c           | 20 ++++++++++++++++----
 xen/arch/x86/mm/shadow/common.c |  4 ++--
 xen/include/asm-x86/p2m.h       |  2 +-
 4 files changed, 22 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 47a7487fa7..a8f5a19da9 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -541,18 +541,18 @@ void hap_final_teardown(struct domain *d)
         }
 
         for ( i = 0; i < MAX_ALTP2M; i++ )
-            p2m_teardown(d->arch.altp2m_p2m[i]);
+            p2m_teardown(d->arch.altp2m_p2m[i], true);
     }
 
     /* Destroy nestedp2m's first */
     for (i = 0; i < MAX_NESTEDP2M; i++) {
-        p2m_teardown(d->arch.nested_p2m[i]);
+        p2m_teardown(d->arch.nested_p2m[i], true);
     }
 
     if ( d->arch.paging.hap.total_pages != 0 )
         hap_teardown(d, NULL);
 
-    p2m_teardown(p2m_get_hostp2m(d));
+    p2m_teardown(p2m_get_hostp2m(d), true);
     /* Free any memory that the p2m teardown released */
     paging_lock(d);
     hap_set_allocation(d, 0, NULL);
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 85681dee26..8ba73082c1 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -741,11 +741,11 @@ int p2m_alloc_table(struct p2m_domain *p2m)
  * hvm fixme: when adding support for pvh non-hardware domains, this path must
  * cleanup any foreign p2m types (release refcnts on them).
  */
-void p2m_teardown(struct p2m_domain *p2m)
+void p2m_teardown(struct p2m_domain *p2m, bool remove_root)
 /* Return all the p2m pages to Xen.
  * We know we don't have any extra mappings to these pages */
 {
-    struct page_info *pg;
+    struct page_info *pg, *root_pg = NULL;
     struct domain *d;
 
     if (p2m == NULL)
@@ -755,10 +755,22 @@ void p2m_teardown(struct p2m_domain *p2m)
 
     p2m_lock(p2m);
     ASSERT(atomic_read(&d->shr_pages) == 0);
-    p2m->phys_table = pagetable_null();
+
+    if ( remove_root )
+        p2m->phys_table = pagetable_null();
+    else if ( !pagetable_is_null(p2m->phys_table) )
+    {
+        root_pg = pagetable_get_page(p2m->phys_table);
+        clear_domain_page(pagetable_get_mfn(p2m->phys_table));
+    }
 
     while ( (pg = page_list_remove_head(&p2m->pages)) )
-        d->arch.paging.free_page(d, pg);
+        if ( pg != root_pg )
+            d->arch.paging.free_page(d, pg);
+
+    if ( root_pg )
+        page_list_add(root_pg, &p2m->pages);
+
     p2m_unlock(p2m);
 }
 
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 4a8882430b..abe6d43343 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2768,7 +2768,7 @@ int shadow_enable(struct domain *d, u32 mode)
     paging_unlock(d);
  out_unlocked:
     if ( rv != 0 && !pagetable_is_null(p2m_get_pagetable(p2m)) )
-        p2m_teardown(p2m);
+        p2m_teardown(p2m, true);
     if ( rv != 0 && pg != NULL )
     {
         pg->count_info &= ~PGC_count_mask;
@@ -2933,7 +2933,7 @@ void shadow_final_teardown(struct domain *d)
         shadow_teardown(d, NULL);
 
     /* It is now safe to pull down the p2m map. */
-    p2m_teardown(p2m_get_hostp2m(d));
+    p2m_teardown(p2m_get_hostp2m(d), true);
     /* Free any shadow memory that the p2m teardown released */
     paging_lock(d);
     shadow_set_allocation(d, 0, NULL);
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 46e8b94a49..46eb51d44c 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -619,7 +619,7 @@ int p2m_init(struct domain *d);
 int p2m_alloc_table(struct p2m_domain *p2m);
 
 /* Return all the p2m resources to Xen. */
-void p2m_teardown(struct p2m_domain *p2m);
+void p2m_teardown(struct p2m_domain *p2m, bool remove_root);
 void p2m_final_teardown(struct domain *d);
 
 /* Add a page to a domain's p2m table */
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.15
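[Editorial note: the remove_root logic above skips the root page while freeing the page list and then re-adds it, so a later final teardown can release it. A sketch with a toy singly linked list (types and field names are illustrative, not Xen's page_info):

```c
#include <assert.h>
#include <stddef.h>

struct page_sk { struct page_sk *next; int is_root; int freed; };

/*
 * Sketch: when remove_root is false, the root page is held back (in the
 * real patch it is also cleared) instead of freed, and is put back on
 * the list afterwards so a final teardown can free it later.
 */
static void teardown_pages(struct page_sk **list, int remove_root)
{
    struct page_sk *pg, *root_pg = NULL;

    while ( (pg = *list) != NULL )
    {
        *list = pg->next;
        if ( !remove_root && pg->is_root )
            root_pg = pg;     /* keep the root around */
        else
            pg->freed = 1;    /* stands in for free_page() */
    }

    if ( root_pg )
    {
        root_pg->next = NULL;
        *list = root_pg;      /* root stays on the page list */
    }
}
```
]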


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:22:36 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.15] x86/HAP: adjust monitor table related error handling
Message-Id: <E1oiFDP-0001wO-3N@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:22:35 +0000

commit d24a10a91d46a56e1d406239643ec651a31033d4
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 15:07:42 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:07:42 2022 +0200

    x86/HAP: adjust monitor table related error handling
    
    hap_make_monitor_table() will return INVALID_MFN if it encounters an
    error condition, but hap_update_paging_modes() wasn’t handling this
    value, resulting in an inappropriate value being stored in
    monitor_table. This would subsequently misguide at least
    hap_vcpu_teardown(). Avoid this by bailing early.
    
    Further, when a domain has already crashed or (perhaps less importantly,
    as no path is known to lead here) is already dying, avoid calling
    domain_crash() on it again - that's at best confusing.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    master commit: 5b44a61180f4f2e4f490a28400c884dd357ff45d
    master date: 2022-10-11 14:21:56 +0200
---
 xen/arch/x86/mm/hap/hap.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index a8f5a19da9..d75dc2b9ed 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -39,6 +39,7 @@
 #include <asm/domain.h>
 #include <xen/numa.h>
 #include <asm/hvm/nestedhvm.h>
+#include <public/sched.h>
 
 #include "private.h"
 
@@ -405,8 +406,13 @@ static mfn_t hap_make_monitor_table(struct vcpu *v)
     return m4mfn;
 
  oom:
-    printk(XENLOG_G_ERR "out of memory building monitor pagetable\n");
-    domain_crash(d);
+    if ( !d->is_dying &&
+         (!d->is_shutting_down || d->shutdown_code != SHUTDOWN_crash) )
+    {
+        printk(XENLOG_G_ERR "%pd: out of memory building monitor pagetable\n",
+               d);
+        domain_crash(d);
+    }
     return INVALID_MFN;
 }
 
@@ -766,6 +772,9 @@ static void hap_update_paging_modes(struct vcpu *v)
     if ( pagetable_is_null(v->arch.hvm.monitor_table) )
     {
         mfn_t mmfn = hap_make_monitor_table(v);
+
+        if ( mfn_eq(mmfn, INVALID_MFN) )
+            goto unlock;
         v->arch.hvm.monitor_table = pagetable_from_mfn(mmfn);
         make_cr3(v, mmfn);
         hvm_update_host_cr3(v);
@@ -774,6 +783,7 @@ static void hap_update_paging_modes(struct vcpu *v)
     /* CR3 is effectively updated by a mode change. Flush ASIDs, etc. */
     hap_update_cr3(v, 0, false);
 
+ unlock:
     paging_unlock(d);
     put_gfn(d, cr3_gfn);
 }
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.15
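The fix above has two parts: the producer only calls domain_crash() on a domain that hasn't already crashed or begun dying, and the consumer bails early instead of storing the INVALID_MFN sentinel into monitor_table. A minimal, self-contained sketch of the consumer-side pattern (hypothetical names, not Xen's real types):

```c
#include <stdint.h>

#define INVALID_MFN UINT64_MAX

struct vcpu_state {
    uint64_t monitor_table;     /* 0 == null pagetable */
};

/* Stand-in for hap_make_monitor_table(): reports failure by returning
 * the INVALID_MFN sentinel rather than a usable frame number. */
static uint64_t make_monitor_table(int out_of_memory)
{
    return out_of_memory ? INVALID_MFN : 0x1000u;
}

/* Stand-in for hap_update_paging_modes(): before the fix the sentinel
 * was stored unconditionally; after it we bail early, leaving the
 * monitor table null so teardown logic isn't misguided. */
static void update_paging_modes(struct vcpu_state *v, int out_of_memory)
{
    if (v->monitor_table == 0) {
        uint64_t mmfn = make_monitor_table(out_of_memory);

        if (mmfn == INVALID_MFN)
            return;             /* bail: never store the sentinel */
        v->monitor_table = mmfn;
    }
}
```

On the failure path the vcpu's monitor table stays null, which is exactly the state later teardown code is prepared to handle.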


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:22:46 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.15] x86/shadow: tolerate failure of sh_set_toplevel_shadow()
Message-Id: <E1oiFDZ-0001x1-7F@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:22:45 +0000

commit 95f6d555ec84383f7daaf3374f65bec5ff4351f5
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 15:07:57 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:07:57 2022 +0200

    x86/shadow: tolerate failure of sh_set_toplevel_shadow()
    
    Subsequently sh_set_toplevel_shadow() will be adjusted to install a
    blank entry in case prealloc fails. There are, in fact, pre-existing
    error paths which would put in place a blank entry. The 4- and 2-level
    code in sh_update_cr3(), however, assumes the top-level entry to be
    valid.
    
    Hence bail from the function in the unlikely event that it's not. Note
    that 3-level logic works differently: In particular a guest is free to
    supply a PDPTR pointing at 4 non-present (or otherwise deemed invalid)
    entries. The guest will crash, but we already cope with that.
    
    Really, mfn_valid() is likely the wrong check to use in
    sh_set_toplevel_shadow(); it should instead be !mfn_eq(gmfn,
    INVALID_MFN). Avoid such a change in a security context, but add a
    corresponding assertion.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: eac000978c1feb5a9ee3236ab0c0da9a477e5336
    master date: 2022-10-11 14:22:24 +0200
---
 xen/arch/x86/mm/shadow/common.c |  1 +
 xen/arch/x86/mm/shadow/multi.c  | 10 ++++++++++
 2 files changed, 11 insertions(+)

diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index abe6d43343..0ab2ac6b7a 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2583,6 +2583,7 @@ void sh_set_toplevel_shadow(struct vcpu *v,
     /* Now figure out the new contents: is this a valid guest MFN? */
     if ( !mfn_valid(gmfn) )
     {
+        ASSERT(mfn_eq(gmfn, INVALID_MFN));
         new_entry = pagetable_null();
         goto install_new_entry;
     }
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 9b43cb116c..7e0494cf7f 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -3697,6 +3697,11 @@ sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
     if ( sh_remove_write_access(d, gmfn, 4, 0) != 0 )
         guest_flush_tlb_mask(d, d->dirty_cpumask);
     sh_set_toplevel_shadow(v, 0, gmfn, SH_type_l4_shadow, sh_make_shadow);
+    if ( unlikely(pagetable_is_null(v->arch.paging.shadow.shadow_table[0])) )
+    {
+        ASSERT(d->is_dying || d->is_shutting_down);
+        return;
+    }
     if ( !shadow_mode_external(d) && !is_pv_32bit_domain(d) )
     {
         mfn_t smfn = pagetable_get_mfn(v->arch.paging.shadow.shadow_table[0]);
@@ -3757,6 +3762,11 @@ sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
     if ( sh_remove_write_access(d, gmfn, 2, 0) != 0 )
         guest_flush_tlb_mask(d, d->dirty_cpumask);
     sh_set_toplevel_shadow(v, 0, gmfn, SH_type_l2_shadow, sh_make_shadow);
+    if ( unlikely(pagetable_is_null(v->arch.paging.shadow.shadow_table[0])) )
+    {
+        ASSERT(d->is_dying || d->is_shutting_down);
+        return;
+    }
 #else
 #error This should never happen
 #endif
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.15
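The interplay the patch relies on - sh_set_toplevel_shadow() installing a blank entry on failure, and sh_update_cr3() checking for that blank entry before using it - can be sketched as follows. This is a simplified illustration with hypothetical names, not Xen's actual interfaces:

```c
#include <stdbool.h>
#include <stdint.h>

#define INVALID_MFN UINT64_MAX

static uint64_t shadow_table;   /* stand-in for shadow_table[0]; 0 == null */

/* Stand-in for sh_set_toplevel_shadow(): on an invalid guest MFN or a
 * failed preallocation it installs a blank (null) entry instead of
 * crashing the hypervisor. */
static void set_toplevel_shadow(uint64_t gmfn, bool prealloc_ok)
{
    shadow_table = (gmfn == INVALID_MFN || !prealloc_ok) ? 0 : gmfn;
}

/* Stand-in for the 4-/2-level paths of sh_update_cr3(): tolerate a
 * null top-level entry by bailing instead of dereferencing it. */
static bool update_cr3(uint64_t gmfn, bool prealloc_ok)
{
    set_toplevel_shadow(gmfn, prealloc_ok);
    if (shadow_table == 0)
        return false;           /* unlikely: domain is dying/crashing */
    /* ... continue installing CR3 from shadow_table ... */
    return true;
}
```

As in the real code, a null top-level entry is expected only for a dying or shutting-down domain, which is why the patch pairs the bail-out with an ASSERT on those flags.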


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:22:56 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.15] x86/shadow: tolerate failure in shadow_prealloc()
Message-Id: <E1oiFDj-0001yN-Ax@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:22:55 +0000

commit 1e26afa846fb9a00b9155280eeae3b8cb8375dd6
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 15:08:14 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:08:14 2022 +0200

    x86/shadow: tolerate failure in shadow_prealloc()
    
    Prevent _shadow_prealloc() from calling BUG() when unable to fulfill
    the pre-allocation and instead return true/false.  Modify
    shadow_prealloc() to crash the domain on allocation failure (if the
    domain is not already dying), as shadow cannot operate normally after
    that.  Modify callers to also gracefully handle {_,}shadow_prealloc()
    failing to fulfill the request.
    
    Note this in turn requires adjusting the callers of
    sh_make_monitor_table() also to handle it returning INVALID_MFN.
    sh_update_paging_modes() is also modified to add additional error
    paths in case of allocation failure, some of which will return with
    null monitor page tables (and the domain likely crashed).  This is no
    different from the current error paths, but the newly introduced ones
    are more likely to trigger.
    
    The newly added failure points in sh_update_paging_modes() also require
    that on some error return paths the previous structures are cleared,
    and thus the monitor table is null.
    
    While there, adjust the 'type' parameter type of shadow_prealloc() to
    unsigned int rather than u32.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: b7f93c6afb12b6061e2d19de2f39ea09b569ac68
    master date: 2022-10-11 14:22:53 +0200
---
 xen/arch/x86/mm/shadow/common.c  | 69 ++++++++++++++++++++++++++++++----------
 xen/arch/x86/mm/shadow/hvm.c     |  4 ++-
 xen/arch/x86/mm/shadow/multi.c   | 11 +++++--
 xen/arch/x86/mm/shadow/private.h |  3 +-
 4 files changed, 66 insertions(+), 21 deletions(-)

diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 0ab2ac6b7a..fc4f7f78ce 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -36,6 +36,7 @@
 #include <asm/flushtlb.h>
 #include <asm/shadow.h>
 #include <xen/numa.h>
+#include <public/sched.h>
 #include "private.h"
 
 DEFINE_PER_CPU(uint32_t,trace_shadow_path_flags);
@@ -927,14 +928,15 @@ static inline void trace_shadow_prealloc_unpin(struct domain *d, mfn_t smfn)
 
 /* Make sure there are at least count order-sized pages
  * available in the shadow page pool. */
-static void _shadow_prealloc(struct domain *d, unsigned int pages)
+static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
 {
     struct vcpu *v;
     struct page_info *sp, *t;
     mfn_t smfn;
     int i;
 
-    if ( d->arch.paging.shadow.free_pages >= pages ) return;
+    if ( d->arch.paging.shadow.free_pages >= pages )
+        return true;
 
     /* Shouldn't have enabled shadows if we've no vcpus. */
     ASSERT(d->vcpu && d->vcpu[0]);
@@ -950,7 +952,8 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
         sh_unpin(d, smfn);
 
         /* See if that freed up enough space */
-        if ( d->arch.paging.shadow.free_pages >= pages ) return;
+        if ( d->arch.paging.shadow.free_pages >= pages )
+            return true;
     }
 
     /* Stage two: all shadow pages are in use in hierarchies that are
@@ -973,7 +976,7 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
                 if ( d->arch.paging.shadow.free_pages >= pages )
                 {
                     guest_flush_tlb_mask(d, d->dirty_cpumask);
-                    return;
+                    return true;
                 }
             }
         }
@@ -986,7 +989,12 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
            d->arch.paging.shadow.total_pages,
            d->arch.paging.shadow.free_pages,
            d->arch.paging.shadow.p2m_pages);
-    BUG();
+
+    ASSERT(d->is_dying);
+
+    guest_flush_tlb_mask(d, d->dirty_cpumask);
+
+    return false;
 }
 
 /* Make sure there are at least count pages of the order according to
@@ -994,9 +1002,19 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
  * This must be called before any calls to shadow_alloc().  Since this
  * will free existing shadows to make room, it must be called early enough
  * to avoid freeing shadows that the caller is currently working on. */
-void shadow_prealloc(struct domain *d, u32 type, unsigned int count)
+bool shadow_prealloc(struct domain *d, unsigned int type, unsigned int count)
 {
-    return _shadow_prealloc(d, shadow_size(type) * count);
+    bool ret = _shadow_prealloc(d, shadow_size(type) * count);
+
+    if ( !ret && !d->is_dying &&
+         (!d->is_shutting_down || d->shutdown_code != SHUTDOWN_crash) )
+        /*
+         * Failing to allocate memory required for shadow usage can only result in
+         * a domain crash, do it here rather that relying on every caller to do it.
+         */
+        domain_crash(d);
+
+    return ret;
 }
 
 /* Deliberately free all the memory we can: this will tear down all of
@@ -1215,7 +1233,7 @@ void shadow_free(struct domain *d, mfn_t smfn)
 static struct page_info *
 shadow_alloc_p2m_page(struct domain *d)
 {
-    struct page_info *pg;
+    struct page_info *pg = NULL;
 
     /* This is called both from the p2m code (which never holds the
      * paging lock) and the log-dirty code (which always does). */
@@ -1233,16 +1251,18 @@ shadow_alloc_p2m_page(struct domain *d)
                     d->arch.paging.shadow.p2m_pages,
                     shadow_min_acceptable_pages(d));
         }
-        paging_unlock(d);
-        return NULL;
+        goto out;
     }
 
-    shadow_prealloc(d, SH_type_p2m_table, 1);
+    if ( !shadow_prealloc(d, SH_type_p2m_table, 1) )
+        goto out;
+
     pg = mfn_to_page(shadow_alloc(d, SH_type_p2m_table, 0));
     d->arch.paging.shadow.p2m_pages++;
     d->arch.paging.shadow.total_pages--;
     ASSERT(!page_get_owner(pg) && !(pg->count_info & PGC_count_mask));
 
+ out:
     paging_unlock(d);
 
     return pg;
@@ -1333,7 +1353,9 @@ int shadow_set_allocation(struct domain *d, unsigned int pages, bool *preempted)
         else if ( d->arch.paging.shadow.total_pages > pages )
         {
             /* Need to return memory to domheap */
-            _shadow_prealloc(d, 1);
+            if ( !_shadow_prealloc(d, 1) )
+                return -ENOMEM;
+
             sp = page_list_remove_head(&d->arch.paging.shadow.freelist);
             ASSERT(sp);
             /*
@@ -2401,12 +2423,13 @@ static void sh_update_paging_modes(struct vcpu *v)
     if ( mfn_eq(v->arch.paging.shadow.oos_snapshot[0], INVALID_MFN) )
     {
         int i;
+
+        if ( !shadow_prealloc(d, SH_type_oos_snapshot, SHADOW_OOS_PAGES) )
+            return;
+
         for(i = 0; i < SHADOW_OOS_PAGES; i++)
-        {
-            shadow_prealloc(d, SH_type_oos_snapshot, 1);
             v->arch.paging.shadow.oos_snapshot[i] =
                 shadow_alloc(d, SH_type_oos_snapshot, 0);
-        }
     }
 #endif /* OOS */
 
@@ -2470,6 +2493,9 @@ static void sh_update_paging_modes(struct vcpu *v)
             mfn_t mmfn = sh_make_monitor_table(
                              v, v->arch.paging.mode->shadow.shadow_levels);
 
+            if ( mfn_eq(mmfn, INVALID_MFN) )
+                return;
+
             v->arch.hvm.monitor_table = pagetable_from_mfn(mmfn);
             make_cr3(v, mmfn);
             hvm_update_host_cr3(v);
@@ -2508,6 +2534,12 @@ static void sh_update_paging_modes(struct vcpu *v)
                 v->arch.hvm.monitor_table = pagetable_null();
                 new_mfn = sh_make_monitor_table(
                               v, v->arch.paging.mode->shadow.shadow_levels);
+                if ( mfn_eq(new_mfn, INVALID_MFN) )
+                {
+                    sh_destroy_monitor_table(v, old_mfn,
+                                             old_mode->shadow.shadow_levels);
+                    return;
+                }
                 v->arch.hvm.monitor_table = pagetable_from_mfn(new_mfn);
                 SHADOW_PRINTK("new monitor table %"PRI_mfn "\n",
                                mfn_x(new_mfn));
@@ -2593,7 +2625,12 @@ void sh_set_toplevel_shadow(struct vcpu *v,
     if ( !mfn_valid(smfn) )
     {
         /* Make sure there's enough free shadow memory. */
-        shadow_prealloc(d, root_type, 1);
+        if ( !shadow_prealloc(d, root_type, 1) )
+        {
+            new_entry = pagetable_null();
+            goto install_new_entry;
+        }
+
         /* Shadow the page. */
         smfn = make_shadow(v, gmfn, root_type);
     }
diff --git a/xen/arch/x86/mm/shadow/hvm.c b/xen/arch/x86/mm/shadow/hvm.c
index 87fc57704f..d68796c495 100644
--- a/xen/arch/x86/mm/shadow/hvm.c
+++ b/xen/arch/x86/mm/shadow/hvm.c
@@ -700,7 +700,9 @@ mfn_t sh_make_monitor_table(const struct vcpu *v, unsigned int shadow_levels)
     ASSERT(!pagetable_get_pfn(v->arch.hvm.monitor_table));
 
     /* Guarantee we can get the memory we need */
-    shadow_prealloc(d, SH_type_monitor_table, CONFIG_PAGING_LEVELS);
+    if ( !shadow_prealloc(d, SH_type_monitor_table, CONFIG_PAGING_LEVELS) )
+        return INVALID_MFN;
+
     m4mfn = shadow_alloc(d, SH_type_monitor_table, 0);
     mfn_to_page(m4mfn)->shadow_flags = 4;
 
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 7e0494cf7f..6a9f82d39c 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -2825,9 +2825,14 @@ static int sh_page_fault(struct vcpu *v,
      * Preallocate shadow pages *before* removing writable accesses
      * otherwhise an OOS L1 might be demoted and promoted again with
      * writable mappings. */
-    shadow_prealloc(d,
-                    SH_type_l1_shadow,
-                    GUEST_PAGING_LEVELS < 4 ? 1 : GUEST_PAGING_LEVELS - 1);
+    if ( !shadow_prealloc(d, SH_type_l1_shadow,
+                          GUEST_PAGING_LEVELS < 4
+                          ? 1 : GUEST_PAGING_LEVELS - 1) )
+    {
+        paging_unlock(d);
+        put_gfn(d, gfn_x(gfn));
+        return 0;
+    }
 
     rc = gw_remove_write_accesses(v, va, &gw);
 
diff --git a/xen/arch/x86/mm/shadow/private.h b/xen/arch/x86/mm/shadow/private.h
index 911db46e73..3fe0388e7c 100644
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -351,7 +351,8 @@ void shadow_promote(struct domain *d, mfn_t gmfn, u32 type);
 void shadow_demote(struct domain *d, mfn_t gmfn, u32 type);
 
 /* Shadow page allocation functions */
-void  shadow_prealloc(struct domain *d, u32 shadow_type, unsigned int count);
+bool __must_check shadow_prealloc(struct domain *d, unsigned int shadow_type,
+                                  unsigned int count);
 mfn_t shadow_alloc(struct domain *d,
                     u32 shadow_type,
                     unsigned long backpointer);
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.15
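The core refactoring - turning a BUG() on preallocation failure into a true/false return, with the domain crash centralized in the wrapper rather than repeated at every call site - follows a pattern that can be sketched like this. Names are hypothetical stand-ins for _shadow_prealloc()/shadow_prealloc()/domain_crash():

```c
#include <stdbool.h>

struct dom {
    unsigned int free_pages;
    bool is_dying;
    bool crashed;
};

/* Stand-in for _shadow_prealloc(): true if the request can be met
 * (the real code also tries to reclaim shadows before giving up). */
static bool prealloc_raw(struct dom *d, unsigned int pages)
{
    return d->free_pages >= pages;
}

/* Stand-in for shadow_prealloc(): shadow code cannot operate normally
 * after a failed preallocation, so crash the domain centrally here
 * rather than relying on every caller to do it - unless the domain is
 * already dying, in which case crashing again is at best confusing. */
static bool prealloc(struct dom *d, unsigned int pages)
{
    bool ok = prealloc_raw(d, pages);

    if (!ok && !d->is_dying)
        d->crashed = true;      /* stand-in for domain_crash(d) */
    return ok;
}
```

Callers then only need the __must_check-style boolean test (`if ( !prealloc(...) ) goto out;`); the crash policy lives in one place.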


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:23:06 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.15] x86/p2m: refuse new allocations for dying domains
Message-Id: <E1oiFDt-0001z2-EN@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:23:05 +0000

commit 4f9b535194f70582863f2a78f113547d8822b2b9
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 15:08:28 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:08:28 2022 +0200

    x86/p2m: refuse new allocations for dying domains
    
    This will in particular prevent any attempts to add entries to the p2m,
    once - in a subsequent change - non-root entries have been removed.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: ff600a8cf8e36f8ecbffecf96a035952e022ab87
    master date: 2022-10-11 14:23:22 +0200
---
 xen/arch/x86/mm/hap/hap.c       |  5 ++++-
 xen/arch/x86/mm/shadow/common.c | 18 ++++++++++++++----
 2 files changed, 18 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index d75dc2b9ed..787991233e 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -245,6 +245,9 @@ static struct page_info *hap_alloc(struct domain *d)
 
     ASSERT(paging_locked_by_me(d));
 
+    if ( unlikely(d->is_dying) )
+        return NULL;
+
     pg = page_list_remove_head(&d->arch.paging.hap.freelist);
     if ( unlikely(!pg) )
         return NULL;
@@ -281,7 +284,7 @@ static struct page_info *hap_alloc_p2m_page(struct domain *d)
         d->arch.paging.hap.p2m_pages++;
         ASSERT(!page_get_owner(pg) && !(pg->count_info & PGC_count_mask));
     }
-    else if ( !d->arch.paging.p2m_alloc_failed )
+    else if ( !d->arch.paging.p2m_alloc_failed && !d->is_dying )
     {
         d->arch.paging.p2m_alloc_failed = 1;
         dprintk(XENLOG_ERR, "d%i failed to allocate from HAP pool\n",
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index fc4f7f78ce..9ad7e5a886 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -938,6 +938,10 @@ static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
     if ( d->arch.paging.shadow.free_pages >= pages )
         return true;
 
+    if ( unlikely(d->is_dying) )
+        /* No reclaim when the domain is dying, teardown will take care of it. */
+        return false;
+
     /* Shouldn't have enabled shadows if we've no vcpus. */
     ASSERT(d->vcpu && d->vcpu[0]);
 
@@ -990,7 +994,7 @@ static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
            d->arch.paging.shadow.free_pages,
            d->arch.paging.shadow.p2m_pages);
 
-    ASSERT(d->is_dying);
+    ASSERT_UNREACHABLE();
 
     guest_flush_tlb_mask(d, d->dirty_cpumask);
 
@@ -1004,10 +1008,13 @@ static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
  * to avoid freeing shadows that the caller is currently working on. */
 bool shadow_prealloc(struct domain *d, unsigned int type, unsigned int count)
 {
-    bool ret = _shadow_prealloc(d, shadow_size(type) * count);
+    bool ret;
 
-    if ( !ret && !d->is_dying &&
-         (!d->is_shutting_down || d->shutdown_code != SHUTDOWN_crash) )
+    if ( unlikely(d->is_dying) )
+       return false;
+
+    ret = _shadow_prealloc(d, shadow_size(type) * count);
+    if ( !ret && (!d->is_shutting_down || d->shutdown_code != SHUTDOWN_crash) )
         /*
          * Failing to allocate memory required for shadow usage can only result in
          * a domain crash, do it here rather that relying on every caller to do it.
@@ -1235,6 +1242,9 @@ shadow_alloc_p2m_page(struct domain *d)
 {
     struct page_info *pg = NULL;
 
+    if ( unlikely(d->is_dying) )
+       return NULL;
+
     /* This is called both from the p2m code (which never holds the
      * paging lock) and the log-dirty code (which always does). */
     paging_lock_recursive(d);
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.15


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:23:16 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:23:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420255.664951 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFE4-0001rX-Dc; Tue, 11 Oct 2022 13:23:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420255.664951; Tue, 11 Oct 2022 13:23:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFE4-0001rO-Ai; Tue, 11 Oct 2022 13:23:16 +0000
Received: by outflank-mailman (input) for mailman id 420255;
 Tue, 11 Oct 2022 13:23:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFE3-0001rE-Iz
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:23:15 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFE3-0002ZO-IL
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:23:15 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFE3-0001za-HS
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:23:15 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=VGvh4oof0yd4Z7eD4Trt+YEOw2J9+rXZYapnj1annx0=; b=StOBYhn7nsX7CoXBTgyFtkq5Am
	qBYW7RS1peCMYC9At8xzNOki2i3P6vYxqi3w+2PjrCM9Q+G98b6c6xrVHdvBREAnyjDnJRd95OR+c
	KZ5evo/C9u4sy0+aQKdw8q00PTPzYn8Rudy7Wv6/+HuM+tXe/962vRDXr2G4KVmCgOWY=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.15] x86/p2m: truly free paging pool memory for dying domains
Message-Id: <E1oiFE3-0001za-HS@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:23:15 +0000

commit 7f055b011a657f8f16b0df242301efb312058eea
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 15:08:42 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:08:42 2022 +0200

    x86/p2m: truly free paging pool memory for dying domains
    
    Modify {hap,shadow}_free to free the page immediately if the domain is
    dying, so that pages don't accumulate in the pool when
    {shadow,hap}_final_teardown() get called. This is to limit the amount of
    work which needs to be done there (in a non-preemptable manner).
    
    Note the call to shadow_free() in shadow_free_p2m_page() is moved after
    increasing total_pages, so that the decrease done in shadow_free() in
    case the domain is dying doesn't underflow the counter, even if just for
    a short interval.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: f50a2c0e1d057c00d6061f40ae24d068226052ad
    master date: 2022-10-11 14:23:51 +0200
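The free-on-dying behaviour this commit describes can be sketched as a minimal model. The struct and function names below are illustrative only, not the actual Xen code; the point is the branch between shrinking the pool immediately versus parking the page on the freelist:

```c
/*
 * Minimal model of the free-on-dying path described above: when the
 * domain is dying, hand the page straight back to the heap (modelled
 * here as decrementing total_pages) instead of parking it on the pool
 * freelist. Names are illustrative, not the Xen ones.
 */
struct pool {
    unsigned long total_pages;
    unsigned long free_pages;
    int dying;
};

static void pool_free_page(struct pool *p)
{
    if ( p->dying )
        p->total_pages--;   /* freed immediately, pool shrinks */
    else
        p->free_pages++;    /* parked on the freelist for reuse */
}
```

This also illustrates why the ordering note in the message matters: total_pages must be incremented before a call that may take the dying path, or the decrement could underflow the counter.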
---
 xen/arch/x86/mm/hap/hap.c       | 12 ++++++++++++
 xen/arch/x86/mm/shadow/common.c | 28 +++++++++++++++++++++++++---
 2 files changed, 37 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 787991233e..aef2297450 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -265,6 +265,18 @@ static void hap_free(struct domain *d, mfn_t mfn)
 
     ASSERT(paging_locked_by_me(d));
 
+    /*
+     * For dying domains, actually free the memory here. This way less work is
+     * left to hap_final_teardown(), which cannot easily have preemption checks
+     * added.
+     */
+    if ( unlikely(d->is_dying) )
+    {
+        free_domheap_page(pg);
+        d->arch.paging.hap.total_pages--;
+        return;
+    }
+
     d->arch.paging.hap.free_pages++;
     page_list_add_tail(pg, &d->arch.paging.hap.freelist);
 }
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 9ad7e5a886..366956c146 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -1184,6 +1184,7 @@ mfn_t shadow_alloc(struct domain *d,
 void shadow_free(struct domain *d, mfn_t smfn)
 {
     struct page_info *next = NULL, *sp = mfn_to_page(smfn);
+    bool dying = ACCESS_ONCE(d->is_dying);
     struct page_list_head *pin_list;
     unsigned int pages;
     u32 shadow_type;
@@ -1226,11 +1227,32 @@ void shadow_free(struct domain *d, mfn_t smfn)
          * just before the allocator hands the page out again. */
         page_set_tlbflush_timestamp(sp);
         perfc_decr(shadow_alloc_count);
-        page_list_add_tail(sp, &d->arch.paging.shadow.freelist);
+
+        /*
+         * For dying domains, actually free the memory here. This way less
+         * work is left to shadow_final_teardown(), which cannot easily have
+         * preemption checks added.
+         */
+        if ( unlikely(dying) )
+        {
+            /*
+             * The backpointer field (sh.back) used by shadow code aliases the
+             * domain owner field, unconditionally clear it here to avoid
+             * free_domheap_page() attempting to parse it.
+             */
+            page_set_owner(sp, NULL);
+            free_domheap_page(sp);
+        }
+        else
+            page_list_add_tail(sp, &d->arch.paging.shadow.freelist);
+
         sp = next;
     }
 
-    d->arch.paging.shadow.free_pages += pages;
+    if ( unlikely(dying) )
+        d->arch.paging.shadow.total_pages -= pages;
+    else
+        d->arch.paging.shadow.free_pages += pages;
 }
 
 /* Divert a page from the pool to be used by the p2m mapping.
@@ -1300,9 +1322,9 @@ shadow_free_p2m_page(struct domain *d, struct page_info *pg)
      * paging lock) and the log-dirty code (which always does). */
     paging_lock_recursive(d);
 
-    shadow_free(d, page_to_mfn(pg));
     d->arch.paging.shadow.p2m_pages--;
     d->arch.paging.shadow.total_pages++;
+    shadow_free(d, page_to_mfn(pg));
 
     paging_unlock(d);
 }
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.15


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:23:27 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:23:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420256.664955 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFEF-0001ur-FB; Tue, 11 Oct 2022 13:23:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420256.664955; Tue, 11 Oct 2022 13:23:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFEF-0001uj-CF; Tue, 11 Oct 2022 13:23:27 +0000
Received: by outflank-mailman (input) for mailman id 420256;
 Tue, 11 Oct 2022 13:23:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFED-0001uZ-O4
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:23:25 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFED-0002ZT-LP
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:23:25 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFED-00020I-Kj
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:23:25 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=dejkPQgPi8NaiQq+QyocIp4Uk6qpzpug1NMJOjLzkOk=; b=SrE3DLR5fJHeJv93Vw47vQoip4
	TxgUOKXnRHxWmMWJ82CheB3FHrQ8/Ge3puXNOvdmP+gPbySq1ZqhPd3qptgM9P+sIDU46M8q6zx9L
	DgCjayraUlwEhmp9wPNgPzQP4aARF6YFawQv+hPJldEbQPHZuaxGVmL5U/9lM3PGC7/w=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.15] x86/p2m: free the paging memory pool preemptively
Message-Id: <E1oiFED-00020I-Kj@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:23:25 +0000

commit 686c920fa9389fe2b6b619643024ed98b4b7d51f
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 15:08:58 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:08:58 2022 +0200

    x86/p2m: free the paging memory pool preemptively
    
    The paging memory pool is currently freed in two different places:
    from {shadow,hap}_teardown() via domain_relinquish_resources() and
    from {shadow,hap}_final_teardown() via complete_domain_destroy().
    While the former does handle preemption, the latter doesn't.
    
    Attempt to move as much p2m related freeing as possible to happen
    before the call to {shadow,hap}_teardown(), so that most memory can be
    freed in a preemptive way.  In order to avoid causing issues to
    existing callers, leave the root p2m page tables set and free them in
    {hap,shadow}_final_teardown().  Also modify {hap,shadow}_free to free
    the page immediately if the domain is dying, so that pages don't
    accumulate in the pool when {shadow,hap}_final_teardown() get called.
    
    Move altp2m_vcpu_disable_ve() to be done in hap_teardown(), as that's
    the place where altp2m_active gets disabled now.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Reported-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: e7aa55c0aab36d994bf627c92bd5386ae167e16e
    master date: 2022-10-11 14:24:21 +0200
---
 xen/arch/x86/domain.c           |  7 -------
 xen/arch/x86/mm/hap/hap.c       | 42 +++++++++++++++++++++++++----------------
 xen/arch/x86/mm/shadow/common.c | 12 ++++++++++++
 3 files changed, 38 insertions(+), 23 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 2838f976d7..ce6ddcf313 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -38,7 +38,6 @@
 #include <xen/livepatch.h>
 #include <public/sysctl.h>
 #include <public/hvm/hvm_vcpu.h>
-#include <asm/altp2m.h>
 #include <asm/regs.h>
 #include <asm/mc146818rtc.h>
 #include <asm/system.h>
@@ -2358,12 +2357,6 @@ int domain_relinquish_resources(struct domain *d)
             vpmu_destroy(v);
         }
 
-        if ( altp2m_active(d) )
-        {
-            for_each_vcpu ( d, v )
-                altp2m_vcpu_disable_ve(v);
-        }
-
         if ( is_pv_domain(d) )
         {
             for_each_vcpu ( d, v )
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index aef2297450..a44fcfd95e 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -28,6 +28,7 @@
 #include <xen/domain_page.h>
 #include <xen/guest_access.h>
 #include <xen/keyhandler.h>
+#include <asm/altp2m.h>
 #include <asm/event.h>
 #include <asm/page.h>
 #include <asm/current.h>
@@ -546,24 +547,8 @@ void hap_final_teardown(struct domain *d)
     unsigned int i;
 
     if ( hvm_altp2m_supported() )
-    {
-        d->arch.altp2m_active = 0;
-
-        if ( d->arch.altp2m_eptp )
-        {
-            free_xenheap_page(d->arch.altp2m_eptp);
-            d->arch.altp2m_eptp = NULL;
-        }
-
-        if ( d->arch.altp2m_visible_eptp )
-        {
-            free_xenheap_page(d->arch.altp2m_visible_eptp);
-            d->arch.altp2m_visible_eptp = NULL;
-        }
-
         for ( i = 0; i < MAX_ALTP2M; i++ )
             p2m_teardown(d->arch.altp2m_p2m[i], true);
-    }
 
     /* Destroy nestedp2m's first */
     for (i = 0; i < MAX_NESTEDP2M; i++) {
@@ -578,6 +563,8 @@ void hap_final_teardown(struct domain *d)
     paging_lock(d);
     hap_set_allocation(d, 0, NULL);
     ASSERT(d->arch.paging.hap.p2m_pages == 0);
+    ASSERT(d->arch.paging.hap.free_pages == 0);
+    ASSERT(d->arch.paging.hap.total_pages == 0);
     paging_unlock(d);
 }
 
@@ -603,6 +590,7 @@ void hap_vcpu_teardown(struct vcpu *v)
 void hap_teardown(struct domain *d, bool *preempted)
 {
     struct vcpu *v;
+    unsigned int i;
 
     ASSERT(d->is_dying);
     ASSERT(d != current->domain);
@@ -611,6 +599,28 @@ void hap_teardown(struct domain *d, bool *preempted)
     for_each_vcpu ( d, v )
         hap_vcpu_teardown(v);
 
+    /* Leave the root pt in case we get further attempts to modify the p2m. */
+    if ( hvm_altp2m_supported() )
+    {
+        if ( altp2m_active(d) )
+            for_each_vcpu ( d, v )
+                altp2m_vcpu_disable_ve(v);
+
+        d->arch.altp2m_active = 0;
+
+        FREE_XENHEAP_PAGE(d->arch.altp2m_eptp);
+        FREE_XENHEAP_PAGE(d->arch.altp2m_visible_eptp);
+
+        for ( i = 0; i < MAX_ALTP2M; i++ )
+            p2m_teardown(d->arch.altp2m_p2m[i], false);
+    }
+
+    /* Destroy nestedp2m's after altp2m. */
+    for ( i = 0; i < MAX_NESTEDP2M; i++ )
+        p2m_teardown(d->arch.nested_p2m[i], false);
+
+    p2m_teardown(p2m_get_hostp2m(d), false);
+
     paging_lock(d); /* Keep various asserts happy */
 
     if ( d->arch.paging.hap.total_pages != 0 )
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 366956c146..680766fd51 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2891,8 +2891,17 @@ void shadow_teardown(struct domain *d, bool *preempted)
     for_each_vcpu ( d, v )
         shadow_vcpu_teardown(v);
 
+    p2m_teardown(p2m_get_hostp2m(d), false);
+
     paging_lock(d);
 
+    /*
+     * Reclaim all shadow memory so that shadow_set_allocation() doesn't find
+     * in-use pages, as _shadow_prealloc() will no longer try to reclaim pages
+     * because the domain is dying.
+     */
+    shadow_blow_tables(d);
+
 #if (SHADOW_OPTIMIZATIONS & (SHOPT_VIRTUAL_TLB|SHOPT_OUT_OF_SYNC))
     /* Free the virtual-TLB array attached to each vcpu */
     for_each_vcpu(d, v)
@@ -3013,6 +3022,9 @@ void shadow_final_teardown(struct domain *d)
                    d->arch.paging.shadow.total_pages,
                    d->arch.paging.shadow.free_pages,
                    d->arch.paging.shadow.p2m_pages);
+    ASSERT(!d->arch.paging.shadow.total_pages);
+    ASSERT(!d->arch.paging.shadow.free_pages);
+    ASSERT(!d->arch.paging.shadow.p2m_pages);
     paging_unlock(d);
 }
 
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.15


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:23:37 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:23:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420258.664959 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFEP-0001xa-Gc; Tue, 11 Oct 2022 13:23:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420258.664959; Tue, 11 Oct 2022 13:23:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFEP-0001xS-Dm; Tue, 11 Oct 2022 13:23:37 +0000
Received: by outflank-mailman (input) for mailman id 420258;
 Tue, 11 Oct 2022 13:23:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFEN-0001x1-PS
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:23:35 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFEN-0002Zc-Of
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:23:35 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFEN-00020z-Nr
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:23:35 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=WdQgt+v1+FVN2lpGk2/vt5EwAN7G0uBp+Chz9JnZoPQ=; b=v/jTcY1WnNJz0jZIZrOiPUiYga
	H+rTgF5Eme5NGcuinV6FhGzCnPD8SFRwo9q6RElYGxnxBVNmJ/L2EbyJYrrjSFO+GmwYSaI0hU+RJ
	svgPmqrppmj/wgned/qXwWsfWJJY5oEZBNzGmOxk3qlU2oMO5UzvGdRrn4KgWJX7Kl5c=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.15] xen/x86: p2m: Add preemption in p2m_teardown()
Message-Id: <E1oiFEN-00020z-Nr@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:23:35 +0000

commit b03074bb47d10c9373688b3661c7c31da01c21a3
Author:     Julien Grall <jgrall@amazon.com>
AuthorDate: Tue Oct 11 15:09:12 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:09:12 2022 +0200

    xen/x86: p2m: Add preemption in p2m_teardown()
    
    The list p2m->pages contains all the pages used by the P2M. On large
    instances this can be quite large, and the time spent calling
    d->arch.paging.free_page() can exceed 1ms for an 80GB guest on Xen
    running in a nested environment on a c5.metal.
    
    By extrapolation, it would take > 100ms for an 8TB guest (what we
    currently security support). So add some preemption in p2m_teardown()
    and propagate it to the callers. Note there are 3 places where
    the preemption is not enabled:
        - hap_final_teardown()/shadow_final_teardown(): We are
          preventing updates to the P2M once the domain is dying (so
          no more pages can be allocated) and most of the P2M pages
          will be freed in a preemptive manner when relinquishing the
          resources. So it is fine to disable preemption here.
        - shadow_enable(): This is fine because it will undo the allocation
          that may have been made by p2m_alloc_table() (so only the root
          page table).
    
    The preemption is arbitrarily checked every 1024 iterations.
    
    We now need to include <xen/event.h> in p2m-basic in order to
    import the definition for local_events_need_delivery() used by
    general_preempt_check(). Ideally, the inclusion should happen in
    xen/sched.h but it opened a can of worms.
    
    Note that with the current approach, Xen doesn't keep track of whether
    the alt/nested P2Ms have been cleared, so there is some redundant
    work. However, this is not expected to incur too much overhead (the
    P2M lock shouldn't be contended during teardown), so this
    optimization is left outside of the security fix.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    master commit: 8a2111250b424edc49c65c4d41b276766d30635c
    master date: 2022-10-11 14:24:48 +0200
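The per-iteration preemption pattern described above can be sketched in isolation. drain_with_preempt and its parameters are illustrative stand-ins for the p2m page-list walk and general_preempt_check(); the real loop lives in p2m_teardown():

```c
/*
 * Illustrative sketch of the preemption pattern added to p2m_teardown():
 * free items in a loop, but only poll for preemption every 1024
 * iterations so the check doesn't dominate the loop. preempt_pending
 * stands in for general_preempt_check(); a NULL preempted pointer
 * disables preemption, mirroring the non-preemptible callers.
 */
static unsigned int drain_with_preempt(unsigned int nr_items,
                                       int preempt_pending,
                                       int *preempted)
{
    unsigned int i = 0, freed = 0;

    while ( freed < nr_items )
    {
        freed++;    /* stands in for d->arch.paging.free_page(d, pg) */

        /* Arbitrarily check preemption every 1024 iterations */
        if ( preempted && !(++i % 1024) && preempt_pending )
        {
            *preempted = 1;
            break;
        }
    }

    return freed;
}
```

With preemption pending, the loop stops at the first multiple of 1024 and reports via *preempted; short runs (or a NULL pointer) complete without ever yielding.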
---
 xen/arch/x86/mm/hap/hap.c       | 22 ++++++++++++++++------
 xen/arch/x86/mm/p2m.c           | 18 +++++++++++++++---
 xen/arch/x86/mm/shadow/common.c | 12 +++++++++---
 xen/include/asm-x86/p2m.h       |  2 +-
 4 files changed, 41 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index a44fcfd95e..1f9a157a0c 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -548,17 +548,17 @@ void hap_final_teardown(struct domain *d)
 
     if ( hvm_altp2m_supported() )
         for ( i = 0; i < MAX_ALTP2M; i++ )
-            p2m_teardown(d->arch.altp2m_p2m[i], true);
+            p2m_teardown(d->arch.altp2m_p2m[i], true, NULL);
 
     /* Destroy nestedp2m's first */
     for (i = 0; i < MAX_NESTEDP2M; i++) {
-        p2m_teardown(d->arch.nested_p2m[i], true);
+        p2m_teardown(d->arch.nested_p2m[i], true, NULL);
     }
 
     if ( d->arch.paging.hap.total_pages != 0 )
         hap_teardown(d, NULL);
 
-    p2m_teardown(p2m_get_hostp2m(d), true);
+    p2m_teardown(p2m_get_hostp2m(d), true, NULL);
     /* Free any memory that the p2m teardown released */
     paging_lock(d);
     hap_set_allocation(d, 0, NULL);
@@ -612,14 +612,24 @@ void hap_teardown(struct domain *d, bool *preempted)
         FREE_XENHEAP_PAGE(d->arch.altp2m_visible_eptp);
 
         for ( i = 0; i < MAX_ALTP2M; i++ )
-            p2m_teardown(d->arch.altp2m_p2m[i], false);
+        {
+            p2m_teardown(d->arch.altp2m_p2m[i], false, preempted);
+            if ( preempted && *preempted )
+                return;
+        }
     }
 
     /* Destroy nestedp2m's after altp2m. */
     for ( i = 0; i < MAX_NESTEDP2M; i++ )
-        p2m_teardown(d->arch.nested_p2m[i], false);
+    {
+        p2m_teardown(d->arch.nested_p2m[i], false, preempted);
+        if ( preempted && *preempted )
+            return;
+    }
 
-    p2m_teardown(p2m_get_hostp2m(d), false);
+    p2m_teardown(p2m_get_hostp2m(d), false, preempted);
+    if ( preempted && *preempted )
+        return;
 
     paging_lock(d); /* Keep various asserts happy */
 
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 8ba73082c1..107f6778a6 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -741,12 +741,13 @@ int p2m_alloc_table(struct p2m_domain *p2m)
  * hvm fixme: when adding support for pvh non-hardware domains, this path must
  * cleanup any foreign p2m types (release refcnts on them).
  */
-void p2m_teardown(struct p2m_domain *p2m, bool remove_root)
+void p2m_teardown(struct p2m_domain *p2m, bool remove_root, bool *preempted)
 /* Return all the p2m pages to Xen.
  * We know we don't have any extra mappings to these pages */
 {
     struct page_info *pg, *root_pg = NULL;
     struct domain *d;
+    unsigned int i = 0;
 
     if (p2m == NULL)
         return;
@@ -765,8 +766,19 @@ void p2m_teardown(struct p2m_domain *p2m, bool remove_root)
     }
 
     while ( (pg = page_list_remove_head(&p2m->pages)) )
-        if ( pg != root_pg )
-            d->arch.paging.free_page(d, pg);
+    {
+        if ( pg == root_pg )
+            continue;
+
+        d->arch.paging.free_page(d, pg);
+
+        /* Arbitrarily check preemption every 1024 iterations */
+        if ( preempted && !(++i % 1024) && general_preempt_check() )
+        {
+            *preempted = true;
+            break;
+        }
+    }
 
     if ( root_pg )
         page_list_add(root_pg, &p2m->pages);
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 680766fd51..8f7fddcee1 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2837,8 +2837,12 @@ int shadow_enable(struct domain *d, u32 mode)
  out_locked:
     paging_unlock(d);
  out_unlocked:
+    /*
+     * This is fine to ignore the preemption here because only the root
+     * will be allocated by p2m_alloc_table().
+     */
     if ( rv != 0 && !pagetable_is_null(p2m_get_pagetable(p2m)) )
-        p2m_teardown(p2m, true);
+        p2m_teardown(p2m, true, NULL);
     if ( rv != 0 && pg != NULL )
     {
         pg->count_info &= ~PGC_count_mask;
@@ -2891,7 +2895,9 @@ void shadow_teardown(struct domain *d, bool *preempted)
     for_each_vcpu ( d, v )
         shadow_vcpu_teardown(v);
 
-    p2m_teardown(p2m_get_hostp2m(d), false);
+    p2m_teardown(p2m_get_hostp2m(d), false, preempted);
+    if ( preempted && *preempted )
+        return;
 
     paging_lock(d);
 
@@ -3012,7 +3018,7 @@ void shadow_final_teardown(struct domain *d)
         shadow_teardown(d, NULL);
 
     /* It is now safe to pull down the p2m map. */
-    p2m_teardown(p2m_get_hostp2m(d), true);
+    p2m_teardown(p2m_get_hostp2m(d), true, NULL);
     /* Free any shadow memory that the p2m teardown released */
     paging_lock(d);
     shadow_set_allocation(d, 0, NULL);
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 46eb51d44c..edbe4cee27 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -619,7 +619,7 @@ int p2m_init(struct domain *d);
 int p2m_alloc_table(struct p2m_domain *p2m);
 
 /* Return all the p2m resources to Xen. */
-void p2m_teardown(struct p2m_domain *p2m, bool remove_root);
+void p2m_teardown(struct p2m_domain *p2m, bool remove_root, bool *preempted);
 void p2m_final_teardown(struct domain *d);
 
 /* Add a page to a domain's p2m table */
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.15


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:23:47 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:23:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420259.664962 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFEZ-00020Z-IN; Tue, 11 Oct 2022 13:23:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420259.664962; Tue, 11 Oct 2022 13:23:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFEZ-00020Q-FM; Tue, 11 Oct 2022 13:23:47 +0000
Received: by outflank-mailman (input) for mailman id 420259;
 Tue, 11 Oct 2022 13:23:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFEX-00020F-TR
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:23:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFEX-0002Zm-Rt
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:23:45 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFEX-00021p-R9
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:23:45 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=gfupHGfsVPCO5U7FBEvdUbzg3wJ1DjMp9foMXoLgkPs=; b=5CnP38/NmovOgF/mtZi3hE9mab
	kI+pChjpg60pVkuz/d55iP9cvEMlcq9zsYdCucwsBh7CX3fSfCXwHqczwYM8s+dyLW5sL2agLBS99
	oIF8WnRnPyQJ82fozAAmqpvl3l/QXiD35dipmatzhD9/qQmG0U7eT1GVgmGf8pfvsSkg=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.15] libxl, docs: Use arch-specific default paging memory
Message-Id: <E1oiFEX-00021p-R9@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:23:45 +0000

commit 0c0680d6e7953ca4c91699e60060c732f9ead5c1
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Tue Oct 11 15:09:32 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:09:32 2022 +0200

    libxl, docs: Use arch-specific default paging memory
    
    The default paging memory (described in the `shadow_memory` entry in
    the xl config) in libxl is used to determine the memory pool size for
    xl guests. Currently this size is only used for x86, and contains a
    part of RAM to shadow the resident processes. Since there are no
    shadow mode guests on Arm, the part of RAM to shadow the resident
    processes is not necessary. Therefore, this commit splits the function
    `libxl_get_required_shadow_memory()` into arch-specific helpers and
    renames the helper to `libxl__arch_get_required_paging_memory()`.

    On x86, this helper returns the original value from
    `libxl_get_required_shadow_memory()`, so no functional change is
    intended.
    
    On Arm, this helper returns 1MB per vcpu plus 4KB per MiB of RAM
    for the P2M map and additional 512KB.
    
    Also update the xl.cfg documentation to document the Arm behaviour
    in line with the code changes, and correct the comment style to
    follow the Xen coding style.
    
    This is part of CVE-2022-33747 / XSA-409.
    
    Suggested-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    master commit: 156a239ea288972425f967ac807b3cb5b5e14874
    master date: 2022-10-11 14:28:37 +0200
---
 docs/man/xl.cfg.5.pod.in       |  5 +++++
 tools/libs/light/libxl_arch.h  |  4 ++++
 tools/libs/light/libxl_arm.c   | 12 ++++++++++++
 tools/libs/light/libxl_utils.c |  9 ++-------
 tools/libs/light/libxl_x86.c   | 13 +++++++++++++
 5 files changed, 36 insertions(+), 7 deletions(-)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 56370a37db..af7fae7c52 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -1746,6 +1746,11 @@ are not using hardware assisted paging (i.e. you are using shadow
 mode) and your guest workload consists of a very large number of
 similar processes then increasing this value may improve performance.
 
+On Arm, this field is used to determine the size of the guest P2M pages
+pool, and the default value is 1MB per vCPU plus 4KB per MB of RAM for
+the P2M map. Users should adjust this value if bigger P2M pool size is
+needed.
+
 =back
 
 =head3 Processor and Platform Features
diff --git a/tools/libs/light/libxl_arch.h b/tools/libs/light/libxl_arch.h
index 8527fc5c6c..6741b7f6f4 100644
--- a/tools/libs/light/libxl_arch.h
+++ b/tools/libs/light/libxl_arch.h
@@ -90,6 +90,10 @@ void libxl__arch_update_domain_config(libxl__gc *gc,
                                       libxl_domain_config *dst,
                                       const libxl_domain_config *src);
 
+_hidden
+unsigned long libxl__arch_get_required_paging_memory(unsigned long maxmem_kb,
+                                                     unsigned int smp_cpus);
+
 #if defined(__i386__) || defined(__x86_64__)
 
 #define LAPIC_BASE_ADDRESS  0xfee00000
diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index e2901f13b7..d59b464192 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -154,6 +154,18 @@ out:
     return rc;
 }
 
+unsigned long libxl__arch_get_required_paging_memory(unsigned long maxmem_kb,
+                                                     unsigned int smp_cpus)
+{
+    /*
+     * 256 pages (1MB) per vcpu,
+     * plus 1 page per MiB of RAM for the P2M map,
+     * This is higher than the minimum that Xen would allocate if no value
+     * were given (but the Xen minimum is for safety, not performance).
+     */
+    return 4 * (256 * smp_cpus + maxmem_kb / 1024);
+}
+
 static struct arch_info {
     const char *guest_type;
     const char *timer_compat;
diff --git a/tools/libs/light/libxl_utils.c b/tools/libs/light/libxl_utils.c
index 4699c4a0a3..e276c0ee9c 100644
--- a/tools/libs/light/libxl_utils.c
+++ b/tools/libs/light/libxl_utils.c
@@ -18,6 +18,7 @@
 #include <ctype.h>
 
 #include "libxl_internal.h"
+#include "libxl_arch.h"
 #include "_paths.h"
 
 #ifndef LIBXL_HAVE_NONCONST_LIBXL_BASENAME_RETURN_VALUE
@@ -39,13 +40,7 @@ char *libxl_basename(const char *name)
 
 unsigned long libxl_get_required_shadow_memory(unsigned long maxmem_kb, unsigned int smp_cpus)
 {
-    /* 256 pages (1MB) per vcpu,
-       plus 1 page per MiB of RAM for the P2M map,
-       plus 1 page per MiB of RAM to shadow the resident processes.
-       This is higher than the minimum that Xen would allocate if no value
-       were given (but the Xen minimum is for safety, not performance).
-     */
-    return 4 * (256 * smp_cpus + 2 * (maxmem_kb / 1024));
+    return libxl__arch_get_required_paging_memory(maxmem_kb, smp_cpus);
 }
 
 char *libxl_domid_to_name(libxl_ctx *ctx, uint32_t domid)
diff --git a/tools/libs/light/libxl_x86.c b/tools/libs/light/libxl_x86.c
index 18c3c77ccd..4d66478fe9 100644
--- a/tools/libs/light/libxl_x86.c
+++ b/tools/libs/light/libxl_x86.c
@@ -882,6 +882,19 @@ void libxl__arch_update_domain_config(libxl__gc *gc,
                     libxl_defbool_val(src->b_info.arch_x86.msr_relaxed));
 }
 
+unsigned long libxl__arch_get_required_paging_memory(unsigned long maxmem_kb,
+                                                     unsigned int smp_cpus)
+{
+    /*
+     * 256 pages (1MB) per vcpu,
+     * plus 1 page per MiB of RAM for the P2M map,
+     * plus 1 page per MiB of RAM to shadow the resident processes.
+     * This is higher than the minimum that Xen would allocate if no value
+     * were given (but the Xen minimum is for safety, not performance).
+     */
+    return 4 * (256 * smp_cpus + 2 * (maxmem_kb / 1024));
+}
+
 /*
  * Local variables:
  * mode: C
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.15


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:23:57 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:23:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420260.664967 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFEj-00023w-Kw; Tue, 11 Oct 2022 13:23:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420260.664967; Tue, 11 Oct 2022 13:23:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFEj-00023o-IC; Tue, 11 Oct 2022 13:23:57 +0000
Received: by outflank-mailman (input) for mailman id 420260;
 Tue, 11 Oct 2022 13:23:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFEh-00023X-Vf
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:23:55 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFEh-0002Zq-Uw
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:23:55 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFEh-00023n-U4
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:23:55 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=NpNXTJsxgTZvKR/vH3nNlLNItNdbIIRsxs5IDX0Sbpw=; b=aK7vBzgIcTd+Fl6lYQO0oqcX4g
	Uifa1BMoZf8vQ6FlodOgD8reijeoi+GhRsgmp1hPs8sXRQRPuu92m3ALR61qxYuCfbYiQtffvsy8L
	Rym3EYDYuWQF8JHLcaHtJPCvyBeyD1O3JaWLuWp/jvv9ZP/H8GiUl7w3RrXGZXeWFkK0=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.15] xen/arm: Construct the P2M pages pool for guests
Message-Id: <E1oiFEh-00023n-U4@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:23:55 +0000

commit 45336d8f88725aec65ee177b1b09abf6eef1dc8d
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Tue Oct 11 15:09:58 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:09:58 2022 +0200

    xen/arm: Construct the P2M pages pool for guests
    
    This commit constructs the p2m pages pool for guests, covering the
    data structures and helpers involved.
    
    This is implemented by:
    
    - Adding a `struct paging_domain`, containing a freelist, a counter
    and a spinlock, to `struct arch_domain`, to track the free p2m pages
    and the total number of pages in the p2m pages pool.
    
    - Adding a helper `p2m_get_allocation` to get the p2m pool size.
    
    - Adding a helper `p2m_set_allocation` to set the p2m pages pool
    size. This helper should be called before allocating memory for
    a guest.
    
    - Adding a helper `p2m_teardown_allocation` to free the p2m pages
    pool. This helper should be called during xl domain destruction.
    
    This is part of CVE-2022-33747 / XSA-409.
    
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    master commit: 55914f7fc91a468649b8a3ec3f53ae1c4aca6670
    master date: 2022-10-11 14:28:39 +0200
---
 xen/arch/arm/p2m.c           | 88 ++++++++++++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/domain.h | 10 +++++
 xen/include/asm-arm/p2m.h    |  4 ++
 3 files changed, 102 insertions(+)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 4ad3e0606e..6883d86277 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -50,6 +50,92 @@ static uint64_t generate_vttbr(uint16_t vmid, mfn_t root_mfn)
     return (mfn_to_maddr(root_mfn) | ((uint64_t)vmid << 48));
 }
 
+/* Return the size of the pool, rounded up to the nearest MB */
+unsigned int p2m_get_allocation(struct domain *d)
+{
+    unsigned long nr_pages = ACCESS_ONCE(d->arch.paging.p2m_total_pages);
+
+    return ROUNDUP(nr_pages, 1 << (20 - PAGE_SHIFT)) >> (20 - PAGE_SHIFT);
+}
+
+/*
+ * Set the pool of pages to the required number of pages.
+ * Returns 0 for success, non-zero for failure.
+ * Call with d->arch.paging.lock held.
+ */
+int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted)
+{
+    struct page_info *pg;
+
+    ASSERT(spin_is_locked(&d->arch.paging.lock));
+
+    for ( ; ; )
+    {
+        if ( d->arch.paging.p2m_total_pages < pages )
+        {
+            /* Need to allocate more memory from domheap */
+            pg = alloc_domheap_page(NULL, 0);
+            if ( pg == NULL )
+            {
+                printk(XENLOG_ERR "Failed to allocate P2M pages.\n");
+                return -ENOMEM;
+            }
+            ACCESS_ONCE(d->arch.paging.p2m_total_pages) =
+                d->arch.paging.p2m_total_pages + 1;
+            page_list_add_tail(pg, &d->arch.paging.p2m_freelist);
+        }
+        else if ( d->arch.paging.p2m_total_pages > pages )
+        {
+            /* Need to return memory to domheap */
+            pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
+            if( pg )
+            {
+                ACCESS_ONCE(d->arch.paging.p2m_total_pages) =
+                    d->arch.paging.p2m_total_pages - 1;
+                free_domheap_page(pg);
+            }
+            else
+            {
+                printk(XENLOG_ERR
+                       "Failed to free P2M pages, P2M freelist is empty.\n");
+                return -ENOMEM;
+            }
+        }
+        else
+            break;
+
+        /* Check to see if we need to yield and try again */
+        if ( preempted && general_preempt_check() )
+        {
+            *preempted = true;
+            return -ERESTART;
+        }
+    }
+
+    return 0;
+}
+
+int p2m_teardown_allocation(struct domain *d)
+{
+    int ret = 0;
+    bool preempted = false;
+
+    spin_lock(&d->arch.paging.lock);
+    if ( d->arch.paging.p2m_total_pages != 0 )
+    {
+        ret = p2m_set_allocation(d, 0, &preempted);
+        if ( preempted )
+        {
+            spin_unlock(&d->arch.paging.lock);
+            return -ERESTART;
+        }
+        ASSERT(d->arch.paging.p2m_total_pages == 0);
+    }
+    spin_unlock(&d->arch.paging.lock);
+
+    return ret;
+}
+
 /* Unlock the flush and do a P2M TLB flush if necessary */
 void p2m_write_unlock(struct p2m_domain *p2m)
 {
@@ -1602,7 +1688,9 @@ int p2m_init(struct domain *d)
     unsigned int cpu;
 
     rwlock_init(&p2m->lock);
+    spin_lock_init(&d->arch.paging.lock);
     INIT_PAGE_LIST_HEAD(&p2m->pages);
+    INIT_PAGE_LIST_HEAD(&d->arch.paging.p2m_freelist);
 
     p2m->vmid = INVALID_VMID;
 
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index bb0a6adbe0..1d8935778f 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -40,6 +40,14 @@ struct vtimer {
     uint64_t cval;
 };
 
+struct paging_domain {
+    spinlock_t lock;
+    /* Free P2M pages from the pre-allocated P2M pool */
+    struct page_list_head p2m_freelist;
+    /* Number of pages from the pre-allocated P2M pool */
+    unsigned long p2m_total_pages;
+};
+
 struct arch_domain
 {
 #ifdef CONFIG_ARM_64
@@ -51,6 +59,8 @@ struct arch_domain
 
     struct hvm_domain hvm;
 
+    struct paging_domain paging;
+
     struct vmmio vmmio;
 
     /* Continuable domain_relinquish_resources(). */
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 3a2d51b35d..18675b2345 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -218,6 +218,10 @@ void p2m_restore_state(struct vcpu *n);
 /* Print debugging/statistial info about a domain's p2m */
 void p2m_dump_info(struct domain *d);
 
+unsigned int p2m_get_allocation(struct domain *d);
+int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted);
+int p2m_teardown_allocation(struct domain *d);
+
 static inline void p2m_write_lock(struct p2m_domain *p2m)
 {
     write_lock(&p2m->lock);
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.15


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:24:07 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:24:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420261.664971 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFEt-00026t-MC; Tue, 11 Oct 2022 13:24:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420261.664971; Tue, 11 Oct 2022 13:24:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFEt-00026k-Je; Tue, 11 Oct 2022 13:24:07 +0000
Received: by outflank-mailman (input) for mailman id 420261;
 Tue, 11 Oct 2022 13:24:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFEs-00026W-2V
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:24:06 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFEs-0002aG-1l
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:24:06 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFEs-00024Z-0v
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:24:06 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=iCfKBxNBhLTQfvlVSyikkaT5WpncrNxBthYsu3ZjJSA=; b=27OqfFWPpemoP+ALiUXLjfBfuo
	4lwDx6pgaqGyupk7GlKXzjjB4j8Hh+r1LLqcHEnDcwMEa6ZZ66rfxVZg1VizWDLjdIUigOEyHxaXn
	zmVbKG1yg7N6XnJiAGA9P0GDs9xKxD4GRPimj6JbHzwf3TLDjaBBttYDbc2Xvef4cDDU=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.15] xen/arm, libxl: Implement XEN_DOMCTL_shadow_op for Arm
Message-Id: <E1oiFEs-00024Z-0v@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:24:06 +0000

commit c5215044578e88b401a1296ed6302df05c113c5f
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Tue Oct 11 15:10:16 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:10:16 2022 +0200

    xen/arm, libxl: Implement XEN_DOMCTL_shadow_op for Arm
    
    This commit implements `XEN_DOMCTL_shadow_op` support in Xen
    for Arm. The p2m pages pool size for xl guests is meant to be
    determined by `XEN_DOMCTL_shadow_op`. Hence, this commit:
    
    - Introduces a function `p2m_domctl` and implements the subops
    `XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION` and
    `XEN_DOMCTL_SHADOW_OP_GET_ALLOCATION` of `XEN_DOMCTL_shadow_op`.
    
    - Adds the `XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION` support in libxl.
    
    This enables setting the shadow memory pool size when creating a
    guest from xl, and retrieving the shadow memory pool size from Xen.
    
    Note that the `XEN_DOMCTL_shadow_op` added in this commit is only
    a dummy op; the functionality of setting/getting the p2m memory pool
    size for xl guests is added in subsequent commits.
    
    This is part of CVE-2022-33747 / XSA-409.
    
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    master commit: cf2a68d2ffbc3ce95e01449d46180bddb10d24a0
    master date: 2022-10-11 14:28:42 +0200
---
 tools/libs/light/libxl_arm.c | 12 ++++++++++++
 xen/arch/arm/domctl.c        | 32 ++++++++++++++++++++++++++++++++
 2 files changed, 44 insertions(+)

diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index d59b464192..d21f614ed7 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -131,6 +131,18 @@ int libxl__arch_domain_create(libxl__gc *gc,
                               libxl__domain_build_state *state,
                               uint32_t domid)
 {
+    libxl_ctx *ctx = libxl__gc_owner(gc);
+    unsigned int shadow_mb = DIV_ROUNDUP(d_config->b_info.shadow_memkb, 1024);
+
+    int r = xc_shadow_control(ctx->xch, domid,
+                              XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION,
+                              &shadow_mb, 0);
+    if (r) {
+        LOGED(ERROR, domid,
+              "Failed to set %u MiB shadow allocation", shadow_mb);
+        return ERROR_FAIL;
+    }
+
     return 0;
 }
 
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index a8c48b0bea..a049bc7f3e 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -45,11 +45,43 @@ static int handle_vuart_init(struct domain *d,
     return rc;
 }
 
+static long p2m_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
+                       XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
+{
+    if ( unlikely(d == current->domain) )
+    {
+        printk(XENLOG_ERR "Tried to do a p2m domctl op on itself.\n");
+        return -EINVAL;
+    }
+
+    if ( unlikely(d->is_dying) )
+    {
+        printk(XENLOG_ERR "Tried to do a p2m domctl op on dying domain %u\n",
+               d->domain_id);
+        return -EINVAL;
+    }
+
+    switch ( sc->op )
+    {
+    case XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION:
+        return 0;
+    case XEN_DOMCTL_SHADOW_OP_GET_ALLOCATION:
+        return 0;
+    default:
+    {
+        printk(XENLOG_ERR "Bad p2m domctl op %u\n", sc->op);
+        return -EINVAL;
+    }
+    }
+}
+
 long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
                     XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     switch ( domctl->cmd )
     {
+    case XEN_DOMCTL_shadow_op:
+        return p2m_domctl(d, &domctl->u.shadow_op, u_domctl);
     case XEN_DOMCTL_cacheflush:
     {
         gfn_t s = _gfn(domctl->u.cacheflush.start_pfn);
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.15


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:24:17 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:24:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420262.664975 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFF3-00029W-O0; Tue, 11 Oct 2022 13:24:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420262.664975; Tue, 11 Oct 2022 13:24:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFF3-00029O-L9; Tue, 11 Oct 2022 13:24:17 +0000
Received: by outflank-mailman (input) for mailman id 420262;
 Tue, 11 Oct 2022 13:24:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFF2-00028z-5h
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:24:16 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFF2-0002aj-4w
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:24:16 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFF2-000258-4D
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:24:16 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=cFq1XL3twllXRoiN1snVyNPpdHC/Qo6if/3fBJt7Zhs=; b=bLeUSDuFefEBIlRI8VNRoZiqei
	R3wYP/Uj16Y/KG1CFzAHDMQLdSkasHsEbuLnFuatElGgpCNEWHkhf5qMf0aCGtW86P1k/y5Wuwthc
	txCKgpD+yyuhgZCHUFctpv9YMGKmkJMJoHlAwz8AbIsbd2gUVYaUHwuQFleNLcGOnNaY=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.15] xen/arm: Allocate and free P2M pages from the P2M pool
Message-Id: <E1oiFF2-000258-4D@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:24:16 +0000

commit 7ad38a39f08aadc1578bdb46ccabaad79ed0faee
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Tue Oct 11 15:10:34 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:10:34 2022 +0200

    xen/arm: Allocate and free P2M pages from the P2M pool
    
    This commit sets up and tears down the p2m pages pool for
    non-privileged Arm guests by calling `p2m_set_allocation` and
    `p2m_teardown_allocation`.
    
    - For dom0, P2M pages should come directly from the heap instead of
    the p2m pool, so that the kernel may take advantage of the extended
    regions.
    
    - For xl guests, the p2m pool is set up in `XEN_DOMCTL_shadow_op`
    and destroyed in `domain_relinquish_resources`. Note that
    domctl->u.shadow_op.mb is updated with the new size when setting
    the p2m pool.
    
    - For dom0less domUs, the p2m pool is set up before allocating
    memory during domain creation. Users can specify the p2m pool size
    via the `xen,domain-p2m-mem-mb` dts property.
    
    To actually allocate/free pages from the p2m pool, this commit adds
    two helper functions, `p2m_alloc_page` and `p2m_free_page`. By
    replacing `alloc_domheap_page` and `free_domheap_page` with these
    two helpers, p2m pages are added to/removed from the p2m pool's
    page list rather than the heap.
    
    Since pages from `p2m_alloc_page` are cleaned, take the opportunity
    to remove the redundant `clean_page` in `p2m_create_table`.
    
    This is part of CVE-2022-33747 / XSA-409.
    
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    master commit: cbea5a1149ca7fd4b7cdbfa3ec2e4f109b601ff7
    master date: 2022-10-11 14:28:44 +0200
---
 docs/misc/arm/device-tree/booting.txt |  8 +++++
 xen/arch/arm/domain.c                 |  6 ++++
 xen/arch/arm/domain_build.c           | 29 ++++++++++++++++++
 xen/arch/arm/domctl.c                 | 23 +++++++++++++-
 xen/arch/arm/p2m.c                    | 57 ++++++++++++++++++++++++++++++++---
 5 files changed, 118 insertions(+), 5 deletions(-)

diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
index 5243bc7fd3..470c9491a7 100644
--- a/docs/misc/arm/device-tree/booting.txt
+++ b/docs/misc/arm/device-tree/booting.txt
@@ -164,6 +164,14 @@ with the following properties:
     Both #address-cells and #size-cells need to be specified because
     both sub-nodes (described shortly) have reg properties.
 
+- xen,domain-p2m-mem-mb
+
+    Optional. A 32-bit integer specifying the amount of megabytes of RAM
+    used for the domain P2M pool. This is in-sync with the shadow_memory
+    option in xl.cfg. Leaving this field empty in device tree will lead to
+    the default size of domain P2M pool, i.e. 1MB per guest vCPU plus 4KB
+    per MB of guest RAM plus 512KB for guest extended regions.
+
 Under the "xen,domain" compatible node, one or more sub-nodes are present
 for the DomU kernel and ramdisk.
 
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 223ec9694d..a5ffd952ec 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -985,6 +985,7 @@ enum {
     PROG_page,
     PROG_mapping,
     PROG_p2m,
+    PROG_p2m_pool,
     PROG_done,
 };
 
@@ -1044,6 +1045,11 @@ int domain_relinquish_resources(struct domain *d)
         if ( ret )
             return ret;
 
+    PROGRESS(p2m_pool):
+        ret = p2m_teardown_allocation(d);
+        if( ret )
+            return ret;
+
     PROGRESS(done):
         break;
 
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 26c1342948..df0ec84f03 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2333,6 +2333,21 @@ static void __init find_gnttab_region(struct domain *d,
            kinfo->gnttab_start, kinfo->gnttab_start + kinfo->gnttab_size);
 }
 
+static unsigned long __init domain_p2m_pages(unsigned long maxmem_kb,
+                                             unsigned int smp_cpus)
+{
+    /*
+     * Keep in sync with libxl__get_required_paging_memory().
+     * 256 pages (1MB) per vcpu, plus 1 page per MiB of RAM for the P2M map,
+     * plus 128 pages to cover extended regions.
+     */
+    unsigned long memkb = 4 * (256 * smp_cpus + (maxmem_kb / 1024) + 128);
+
+    BUILD_BUG_ON(PAGE_SIZE != SZ_4K);
+
+    return DIV_ROUND_UP(memkb, 1024) << (20 - PAGE_SHIFT);
+}
+
 static int __init construct_domain(struct domain *d, struct kernel_info *kinfo)
 {
     unsigned int i;
@@ -2424,6 +2439,8 @@ static int __init construct_domU(struct domain *d,
     struct kernel_info kinfo = {};
     int rc;
     u64 mem;
+    u32 p2m_mem_mb;
+    unsigned long p2m_pages;
 
     rc = dt_property_read_u64(node, "memory", &mem);
     if ( !rc )
@@ -2433,6 +2450,18 @@ static int __init construct_domU(struct domain *d,
     }
     kinfo.unassigned_mem = (paddr_t)mem * SZ_1K;
 
+    rc = dt_property_read_u32(node, "xen,domain-p2m-mem-mb", &p2m_mem_mb);
+    /* If xen,domain-p2m-mem-mb is not specified, use the default value. */
+    p2m_pages = rc ?
+                p2m_mem_mb << (20 - PAGE_SHIFT) :
+                domain_p2m_pages(mem, d->max_vcpus);
+
+    spin_lock(&d->arch.paging.lock);
+    rc = p2m_set_allocation(d, p2m_pages, NULL);
+    spin_unlock(&d->arch.paging.lock);
+    if ( rc != 0 )
+        return rc;
+
     printk("*** LOADING DOMU cpus=%u memory=%"PRIx64"KB ***\n", d->max_vcpus, mem);
 
     kinfo.vpl011 = dt_property_read_bool(node, "vpl011");
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index a049bc7f3e..4ab5ed4ab2 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -48,6 +48,9 @@ static int handle_vuart_init(struct domain *d,
 static long p2m_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
                        XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
+    long rc;
+    bool preempted = false;
+
     if ( unlikely(d == current->domain) )
     {
         printk(XENLOG_ERR "Tried to do a p2m domctl op on itself.\n");
@@ -64,9 +67,27 @@ static long p2m_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
     switch ( sc->op )
     {
     case XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION:
-        return 0;
+    {
+        /* Allow and handle preemption */
+        spin_lock(&d->arch.paging.lock);
+        rc = p2m_set_allocation(d, sc->mb << (20 - PAGE_SHIFT), &preempted);
+        spin_unlock(&d->arch.paging.lock);
+
+        if ( preempted )
+            /* Not finished. Set up to re-run the call. */
+            rc = hypercall_create_continuation(__HYPERVISOR_domctl, "h",
+                                               u_domctl);
+        else
+            /* Finished. Return the new allocation. */
+            sc->mb = p2m_get_allocation(d);
+
+        return rc;
+    }
     case XEN_DOMCTL_SHADOW_OP_GET_ALLOCATION:
+    {
+        sc->mb = p2m_get_allocation(d);
         return 0;
+    }
     default:
     {
         printk(XENLOG_ERR "Bad p2m domctl op %u\n", sc->op);
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 6883d86277..c1055ff2a7 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -50,6 +50,54 @@ static uint64_t generate_vttbr(uint16_t vmid, mfn_t root_mfn)
     return (mfn_to_maddr(root_mfn) | ((uint64_t)vmid << 48));
 }
 
+static struct page_info *p2m_alloc_page(struct domain *d)
+{
+    struct page_info *pg;
+
+    spin_lock(&d->arch.paging.lock);
+    /*
+     * For hardware domain, there should be no limit in the number of pages that
+     * can be allocated, so that the kernel may take advantage of the extended
+     * regions. Hence, allocate p2m pages for hardware domains from heap.
+     */
+    if ( is_hardware_domain(d) )
+    {
+        pg = alloc_domheap_page(NULL, 0);
+        if ( pg == NULL )
+        {
+            printk(XENLOG_G_ERR "Failed to allocate P2M pages for hwdom.\n");
+            spin_unlock(&d->arch.paging.lock);
+            return NULL;
+        }
+    }
+    else
+    {
+        pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
+        if ( unlikely(!pg) )
+        {
+            spin_unlock(&d->arch.paging.lock);
+            return NULL;
+        }
+        d->arch.paging.p2m_total_pages--;
+    }
+    spin_unlock(&d->arch.paging.lock);
+
+    return pg;
+}
+
+static void p2m_free_page(struct domain *d, struct page_info *pg)
+{
+    spin_lock(&d->arch.paging.lock);
+    if ( is_hardware_domain(d) )
+        free_domheap_page(pg);
+    else
+    {
+        d->arch.paging.p2m_total_pages++;
+        page_list_add_tail(pg, &d->arch.paging.p2m_freelist);
+    }
+    spin_unlock(&d->arch.paging.lock);
+}
+
 /* Return the size of the pool, rounded up to the nearest MB */
 unsigned int p2m_get_allocation(struct domain *d)
 {
@@ -751,7 +799,7 @@ static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry)
 
     ASSERT(!p2m_is_valid(*entry));
 
-    page = alloc_domheap_page(NULL, 0);
+    page = p2m_alloc_page(p2m->domain);
     if ( page == NULL )
         return -ENOMEM;
 
@@ -878,7 +926,7 @@ static void p2m_free_entry(struct p2m_domain *p2m,
     pg = mfn_to_page(mfn);
 
     page_list_del(pg, &p2m->pages);
-    free_domheap_page(pg);
+    p2m_free_page(p2m->domain, pg);
 }
 
 static bool p2m_split_superpage(struct p2m_domain *p2m, lpae_t *entry,
@@ -902,7 +950,7 @@ static bool p2m_split_superpage(struct p2m_domain *p2m, lpae_t *entry,
     ASSERT(level < target);
     ASSERT(p2m_is_superpage(*entry, level));
 
-    page = alloc_domheap_page(NULL, 0);
+    page = p2m_alloc_page(p2m->domain);
     if ( !page )
         return false;
 
@@ -1644,7 +1692,7 @@ int p2m_teardown(struct domain *d)
 
     while ( (pg = page_list_remove_head(&p2m->pages)) )
     {
-        free_domheap_page(pg);
+        p2m_free_page(p2m->domain, pg);
         count++;
         /* Arbitrarily preempt every 512 iterations */
         if ( !(count % 512) && hypercall_preempt_check() )
@@ -1668,6 +1716,7 @@ void p2m_final_teardown(struct domain *d)
         return;
 
     ASSERT(page_list_empty(&p2m->pages));
+    ASSERT(page_list_empty(&d->arch.paging.p2m_freelist));
 
     if ( p2m->root )
         free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.15


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:24:27 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:24:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420263.664979 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFFD-0002Cj-Q8; Tue, 11 Oct 2022 13:24:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420263.664979; Tue, 11 Oct 2022 13:24:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFFD-0002Cb-Mm; Tue, 11 Oct 2022 13:24:27 +0000
Received: by outflank-mailman (input) for mailman id 420263;
 Tue, 11 Oct 2022 13:24:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFFC-0002CL-8b
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:24:26 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFFC-0002ao-7s
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:24:26 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFFC-00025Z-7A
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:24:26 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=cCZr9bCJrdZre85/GcY7OsDGa8MbAM9uKgD51SYMY8A=; b=K6XFKoQuX4VEjP0ONR+gHaNdG9
	BIMViJlF7sjdPe032xCKL8lzw6k2tP8LuAurrA20x+J0qu9D5YLxodf80OCOVDMCfUKAh69j1TRG0
	3q6eodC1a4Bt+UpJnS9omh+OEiL3Exf5Q4nximR1EoyPDymHaydCgMXhhGc/sCVK6/MA=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.15] gnttab: correct locking on transitive grant copy error path
Message-Id: <E1oiFFC-00025Z-7A@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:24:26 +0000

commit bb43a10fefe494ab747b020fef3e823b63fc566d
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 15:11:01 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:11:01 2022 +0200

    gnttab: correct locking on transitive grant copy error path
    
    While the comment next to the lock dropping in preparation of
    recursively calling acquire_grant_for_copy() mistakenly talks about the
    rd == td case (excluded a few lines further up), the same concerns apply
    to the calling of release_grant_for_copy() on a subsequent error path.
    
    This is CVE-2022-33748 / XSA-411.
    
    Fixes: ad48fb963dbf ("gnttab: fix transitive grant handling")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    master commit: 6e3aab858eef614a21a782a3b73acc88e74690ea
    master date: 2022-10-11 14:29:30 +0200
---
 xen/common/grant_table.c | 19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 77bba98069..0523beb9b7 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -2608,9 +2608,8 @@ acquire_grant_for_copy(
                      trans_domid);
 
         /*
-         * acquire_grant_for_copy() could take the lock on the
-         * remote table (if rd == td), so we have to drop the lock
-         * here and reacquire.
+         * acquire_grant_for_copy() will take the lock on the remote table,
+         * so we have to drop the lock here and reacquire.
          */
         active_entry_release(act);
         grant_read_unlock(rgt);
@@ -2647,11 +2646,25 @@ acquire_grant_for_copy(
                           act->trans_gref != trans_gref ||
                           !act->is_sub_page)) )
         {
+            /*
+             * Like above for acquire_grant_for_copy() we need to drop and then
+             * re-acquire the locks here to prevent lock order inversion issues.
+             * Unlike for acquire_grant_for_copy() we don't need to re-check
+             * anything, as release_grant_for_copy() doesn't depend on the grant
+             * table entry: It only updates internal state and the status flags.
+             */
+            active_entry_release(act);
+            grant_read_unlock(rgt);
+
             release_grant_for_copy(td, trans_gref, readonly);
             rcu_unlock_domain(td);
+
+            grant_read_lock(rgt);
+            act = active_entry_acquire(rgt, gref);
             reduce_status_for_pin(rd, act, status, readonly);
             active_entry_release(act);
             grant_read_unlock(rgt);
+
             put_page(*page);
             *page = NULL;
             return ERESTART;
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.15


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:24:37 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:24:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420265.664983 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFFN-0002Fy-ST; Tue, 11 Oct 2022 13:24:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420265.664983; Tue, 11 Oct 2022 13:24:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFFN-0002Fq-Pq; Tue, 11 Oct 2022 13:24:37 +0000
Received: by outflank-mailman (input) for mailman id 420265;
 Tue, 11 Oct 2022 13:24:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFFM-0002Fh-Bl
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:24:36 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFFM-0002ay-B3
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:24:36 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFFM-000268-AM
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:24:36 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=cKoL5hDGADEfQXcGUEeWtOmaQJfUgiIf8Khg3f8Khrk=; b=DFTqnlHWZT/EfIQPJqfYW1VZOZ
	qaSx0GEicB2vF/KLJIkXY7LgwacRqx1DJGEb090vpkh1csQ+Ui47D8ISOmex2OXFVo945ED+a8VUj
	jpEslqc55OfDFKT8Wg3SRYihCIu85bMPFlln7Y5EFJ1+9dB+mUOryQhhcAGs00+GkF5E=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.15] tools/libxl: Replace deprecated -soundhw on QEMU command line
Message-Id: <E1oiFFM-000268-AM@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:24:36 +0000

commit d65ebacb78901b695bc5e8a075ad1ad865a78928
Author:     Anthony PERARD <anthony.perard@citrix.com>
AuthorDate: Tue Oct 11 15:13:15 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:13:15 2022 +0200

    tools/libxl: Replace deprecated -soundhw on QEMU command line
    
    -soundhw has been deprecated since 825ff02911c9 ("audio: add soundhw
    deprecation notice"), QEMU v5.1, and has been removed for the upcoming
    v7.1 by 039a68373c45 ("introduce -audio as a replacement for -soundhw").
    
    Instead, we can simply add the sound card with "-device" for most options
    that "-soundhw" could handle. "-device" is an option that existed before
    QEMU 1.0 and could already be used to add audio hardware.
    
    The list of possible options for libxl's "soundhw" is taken from
    QEMU 7.0.
    
    The options for "soundhw" are listed in order of preference in the
    manual. The first three (hda, ac97, es1370) are PCI devices and easy to
    test on Linux, while the last four are ISA devices which don't seem to
    work out of the box on Linux.
    
    The sound card 'pcspk' isn't listed, even though it used to be accepted
    by '-soundhw', because QEMU crashes when trying to add it to a Xen
    domain. It wouldn't work with "-device" either; it might need to be
    "-machine pcspk-audiodev=default" instead.
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
    master commit: 62ca138c2c052187783aca3957d3f47c4dcfd683
    master date: 2022-08-18 09:25:50 +0200
---
 docs/man/xl.cfg.5.pod.in                  |  6 +++---
 tools/libs/light/libxl_dm.c               | 19 ++++++++++++++++++-
 tools/libs/light/libxl_types_internal.idl | 10 ++++++++++
 3 files changed, 31 insertions(+), 4 deletions(-)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index af7fae7c52..ef9505f913 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -2523,9 +2523,9 @@ The form serial=DEVICE is also accepted for backwards compatibility.
 
 =item B<soundhw="DEVICE">
 
-Select the virtual sound card to expose to the guest. The valid
-devices are defined by the device model configuration, please see the
-B<qemu(1)> manpage for details. The default is not to export any sound
+Select the virtual sound card to expose to the guest. The valid devices are
+B<hda>, B<ac97>, B<es1370>, B<adlib>, B<cs4231a>, B<gus>, B<sb16> if they are
+available with the device model QEMU. The default is not to export any sound
 device.
 
 =item B<vkb_device=BOOLEAN>
diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index ae5f35e0c3..b86e8ccc85 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -1204,6 +1204,7 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
     uint64_t ram_size;
     const char *path, *chardev;
     bool is_stubdom = libxl_defbool_val(b_info->device_model_stubdomain);
+    int rc;
 
     dm_args = flexarray_make(gc, 16, 1);
     dm_envs = flexarray_make(gc, 16, 1);
@@ -1531,7 +1532,23 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
             }
         }
         if (b_info->u.hvm.soundhw) {
-            flexarray_vappend(dm_args, "-soundhw", b_info->u.hvm.soundhw, NULL);
+            libxl__qemu_soundhw soundhw;
+
+            rc = libxl__qemu_soundhw_from_string(b_info->u.hvm.soundhw, &soundhw);
+            if (rc) {
+                LOGD(ERROR, guest_domid, "Unknown soundhw option '%s'", b_info->u.hvm.soundhw);
+                return ERROR_INVAL;
+            }
+
+            switch (soundhw) {
+            case LIBXL__QEMU_SOUNDHW_HDA:
+                flexarray_vappend(dm_args, "-device", "intel-hda",
+                                  "-device", "hda-duplex", NULL);
+                break;
+            default:
+                flexarray_append_pair(dm_args, "-device",
+                                      (char*)libxl__qemu_soundhw_to_string(soundhw));
+            }
         }
         if (!libxl__acpi_defbool_val(b_info)) {
             flexarray_append(dm_args, "-no-acpi");
diff --git a/tools/libs/light/libxl_types_internal.idl b/tools/libs/light/libxl_types_internal.idl
index 3593e21dbb..caa08d3229 100644
--- a/tools/libs/light/libxl_types_internal.idl
+++ b/tools/libs/light/libxl_types_internal.idl
@@ -55,3 +55,13 @@ libxl__device_action = Enumeration("device_action", [
     (1, "ADD"),
     (2, "REMOVE"),
     ])
+
+libxl__qemu_soundhw = Enumeration("qemu_soundhw", [
+    (1, "ac97"),
+    (2, "adlib"),
+    (3, "cs4231a"),
+    (4, "es1370"),
+    (5, "gus"),
+    (6, "hda"),
+    (7, "sb16"),
+    ])
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.15


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:24:47 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:24:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420266.664987 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFFX-0002J0-Tv; Tue, 11 Oct 2022 13:24:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420266.664987; Tue, 11 Oct 2022 13:24:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFFX-0002Is-RI; Tue, 11 Oct 2022 13:24:47 +0000
Received: by outflank-mailman (input) for mailman id 420266;
 Tue, 11 Oct 2022 13:24:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFFW-0002Ie-Ev
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:24:46 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFFW-0002b8-EC
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:24:46 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFFW-00026f-DF
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:24:46 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=jAQNY7JvcK3o0KuaCgxjgwsE74YWYFJF1LG2hV8nWxo=; b=6kG2BqojKzqNeF5XKRLMH8WTLp
	eEtOgDsV8BbbnC9jfNX+JSTIfA0n5+kLzPdEZN10WLgDF03mxQaUJ4HDRa2x5Ahv1KFo6xI8+uRgO
	DXkXm1RsCL87btsBl9X/IK5/cms/vxdye/H9hWjZS3lKjL0WQBVrm9d7bQFxltt6mAjA=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.15] x86/CPUID: surface suitable value in EBX of XSTATE subleaf 1
Message-Id: <E1oiFFW-00026f-DF@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:24:46 +0000

commit 7923ea47e578bca30a6e45951a9da09e827ff028
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 15:14:05 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:14:05 2022 +0200

    x86/CPUID: surface suitable value in EBX of XSTATE subleaf 1
    
    While the SDM isn't very clear about this, our present behavior makes
    Linux 5.19 unhappy. As of commit 8ad7e8f69695 ("x86/fpu/xsave: Support
    XSAVEC in the kernel") Linux uses this CPUID output also to size the
    compacted area used by XSAVEC. Getting back zero there isn't well
    received, yet for PV that's the default on capable hardware: XSAVES
    isn't exposed to PV domains.
    
    Considering that the size reported is that of the compacted save area,
    I view Linux's assumption as appropriate (short of the SDM properly
    considering the case). Therefore we need to populate the field also
    when only XSAVEC is supported for a guest.
    
    Fixes: 460b9a4b3630 ("x86/xsaves: enable xsaves/xrstors for hvm guest")
    Fixes: 8d050ed1097c ("x86: don't expose XSAVES capability to PV guests")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: c3bd0b83ea5b7c0da6542687436042eeea1e7909
    master date: 2022-08-24 14:23:59 +0200
---
 xen/arch/x86/cpuid.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index ee2c4ea03a..11c95178f1 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -1052,7 +1052,7 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
         switch ( subleaf )
         {
         case 1:
-            if ( p->xstate.xsaves )
+            if ( p->xstate.xsavec || p->xstate.xsaves )
             {
                 /*
                  * TODO: Figure out what to do for XSS state.  VT-x manages
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.15


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:24:58 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:24:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420267.664991 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFFh-0002M1-VI; Tue, 11 Oct 2022 13:24:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420267.664991; Tue, 11 Oct 2022 13:24:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFFh-0002Lt-Sg; Tue, 11 Oct 2022 13:24:57 +0000
Received: by outflank-mailman (input) for mailman id 420267;
 Tue, 11 Oct 2022 13:24:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFFg-0002Lk-I6
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:24:56 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFFg-0002bC-HK
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:24:56 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFFg-00027O-GY
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:24:56 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=Svi7ufLK/fHTz5UlFW53Z79rxrCN4/7x+MD89lWO1T4=; b=JYaPgiz6Abp/NnNJN+GXjZ9S0e
	BWqFFU33Q2RS/RAmbS51F/G4yAHY8voCaSEE5chYOXjUT4WLu63g8QnoH6q869qACYVkMhYHgGTbD
	2PpEZdyvwIVQcur3GGq82KP11gPldEsmk+QuhmIPY404IDDNSKKecjkLH86k7wxygR7A=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.15] xen/sched: introduce cpupool_update_node_affinity()
Message-Id: <E1oiFFg-00027O-GY@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:24:56 +0000

commit 735b10844489babf52d3193193285a7311cf2c39
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Tue Oct 11 15:14:22 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:14:22 2022 +0200

    xen/sched: introduce cpupool_update_node_affinity()
    
    For updating the node affinities of all domains in a cpupool add a new
    function cpupool_update_node_affinity().
    
    In order to avoid multiple allocations of cpumasks carve out memory
    allocation and freeing from domain_update_node_affinity() into new
    helpers, which can be used by cpupool_update_node_affinity().
    
    Modify domain_update_node_affinity() to take an additional parameter
    for passing the allocated memory in and to allocate and free the memory
    via the new helpers in case NULL was passed.
    
    This will help later to pre-allocate the cpumasks in order to avoid
    allocations in stop-machine context.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Tested-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: a83fa1e2b96ace65b45dde6954d67012633a082b
    master date: 2022-09-05 11:42:30 +0100
---
 xen/common/sched/core.c    | 54 +++++++++++++++++++++++++++++++---------------
 xen/common/sched/cpupool.c | 39 ++++++++++++++++++---------------
 xen/common/sched/private.h |  7 ++++++
 xen/include/xen/sched.h    |  9 +++++++-
 4 files changed, 74 insertions(+), 35 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index f07bd2681f..065a83eca9 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -1824,9 +1824,28 @@ int vcpu_affinity_domctl(struct domain *d, uint32_t cmd,
     return ret;
 }
 
-void domain_update_node_affinity(struct domain *d)
+bool alloc_affinity_masks(struct affinity_masks *affinity)
 {
-    cpumask_var_t dom_cpumask, dom_cpumask_soft;
+    if ( !alloc_cpumask_var(&affinity->hard) )
+        return false;
+    if ( !alloc_cpumask_var(&affinity->soft) )
+    {
+        free_cpumask_var(affinity->hard);
+        return false;
+    }
+
+    return true;
+}
+
+void free_affinity_masks(struct affinity_masks *affinity)
+{
+    free_cpumask_var(affinity->soft);
+    free_cpumask_var(affinity->hard);
+}
+
+void domain_update_node_aff(struct domain *d, struct affinity_masks *affinity)
+{
+    struct affinity_masks masks;
     cpumask_t *dom_affinity;
     const cpumask_t *online;
     struct sched_unit *unit;
@@ -1836,14 +1855,16 @@ void domain_update_node_affinity(struct domain *d)
     if ( !d->vcpu || !d->vcpu[0] )
         return;
 
-    if ( !zalloc_cpumask_var(&dom_cpumask) )
-        return;
-    if ( !zalloc_cpumask_var(&dom_cpumask_soft) )
+    if ( !affinity )
     {
-        free_cpumask_var(dom_cpumask);
-        return;
+        affinity = &masks;
+        if ( !alloc_affinity_masks(affinity) )
+            return;
     }
 
+    cpumask_clear(affinity->hard);
+    cpumask_clear(affinity->soft);
+
     online = cpupool_domain_master_cpumask(d);
 
     spin_lock(&d->node_affinity_lock);
@@ -1864,22 +1885,21 @@ void domain_update_node_affinity(struct domain *d)
          */
         for_each_sched_unit ( d, unit )
         {
-            cpumask_or(dom_cpumask, dom_cpumask, unit->cpu_hard_affinity);
-            cpumask_or(dom_cpumask_soft, dom_cpumask_soft,
-                       unit->cpu_soft_affinity);
+            cpumask_or(affinity->hard, affinity->hard, unit->cpu_hard_affinity);
+            cpumask_or(affinity->soft, affinity->soft, unit->cpu_soft_affinity);
         }
         /* Filter out non-online cpus */
-        cpumask_and(dom_cpumask, dom_cpumask, online);
-        ASSERT(!cpumask_empty(dom_cpumask));
+        cpumask_and(affinity->hard, affinity->hard, online);
+        ASSERT(!cpumask_empty(affinity->hard));
         /* And compute the intersection between hard, online and soft */
-        cpumask_and(dom_cpumask_soft, dom_cpumask_soft, dom_cpumask);
+        cpumask_and(affinity->soft, affinity->soft, affinity->hard);
 
         /*
          * If not empty, the intersection of hard, soft and online is the
          * narrowest set we want. If empty, we fall back to hard&online.
          */
-        dom_affinity = cpumask_empty(dom_cpumask_soft) ?
-                           dom_cpumask : dom_cpumask_soft;
+        dom_affinity = cpumask_empty(affinity->soft) ? affinity->hard
+                                                     : affinity->soft;
 
         nodes_clear(d->node_affinity);
         for_each_cpu ( cpu, dom_affinity )
@@ -1888,8 +1908,8 @@ void domain_update_node_affinity(struct domain *d)
 
     spin_unlock(&d->node_affinity_lock);
 
-    free_cpumask_var(dom_cpumask_soft);
-    free_cpumask_var(dom_cpumask);
+    if ( affinity == &masks )
+        free_affinity_masks(affinity);
 }
 
 typedef long ret_t;
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 8c6e6eb9cc..45b6ff9956 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -401,6 +401,25 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c)
     return ret;
 }
 
+/* Update affinities of all domains in a cpupool. */
+static void cpupool_update_node_affinity(const struct cpupool *c)
+{
+    struct affinity_masks masks;
+    struct domain *d;
+
+    if ( !alloc_affinity_masks(&masks) )
+        return;
+
+    rcu_read_lock(&domlist_read_lock);
+
+    for_each_domain_in_cpupool(d, c)
+        domain_update_node_aff(d, &masks);
+
+    rcu_read_unlock(&domlist_read_lock);
+
+    free_affinity_masks(&masks);
+}
+
 /*
  * assign a specific cpu to a cpupool
  * cpupool_lock must be held
@@ -408,7 +427,6 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c)
 static int cpupool_assign_cpu_locked(struct cpupool *c, unsigned int cpu)
 {
     int ret;
-    struct domain *d;
     const cpumask_t *cpus;
 
     cpus = sched_get_opt_cpumask(c->gran, cpu);
@@ -433,12 +451,7 @@ static int cpupool_assign_cpu_locked(struct cpupool *c, unsigned int cpu)
 
     rcu_read_unlock(&sched_res_rculock);
 
-    rcu_read_lock(&domlist_read_lock);
-    for_each_domain_in_cpupool(d, c)
-    {
-        domain_update_node_affinity(d);
-    }
-    rcu_read_unlock(&domlist_read_lock);
+    cpupool_update_node_affinity(c);
 
     return 0;
 }
@@ -447,18 +460,14 @@ static int cpupool_unassign_cpu_finish(struct cpupool *c)
 {
     int cpu = cpupool_moving_cpu;
     const cpumask_t *cpus;
-    struct domain *d;
     int ret;
 
     if ( c != cpupool_cpu_moving )
         return -EADDRNOTAVAIL;
 
-    /*
-     * We need this for scanning the domain list, both in
-     * cpu_disable_scheduler(), and at the bottom of this function.
-     */
     rcu_read_lock(&domlist_read_lock);
     ret = cpu_disable_scheduler(cpu);
+    rcu_read_unlock(&domlist_read_lock);
 
     rcu_read_lock(&sched_res_rculock);
     cpus = get_sched_res(cpu)->cpus;
@@ -485,11 +494,7 @@ static int cpupool_unassign_cpu_finish(struct cpupool *c)
     }
     rcu_read_unlock(&sched_res_rculock);
 
-    for_each_domain_in_cpupool(d, c)
-    {
-        domain_update_node_affinity(d);
-    }
-    rcu_read_unlock(&domlist_read_lock);
+    cpupool_update_node_affinity(c);
 
     return ret;
 }
diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h
index 92d0d49610..6e036f8c80 100644
--- a/xen/common/sched/private.h
+++ b/xen/common/sched/private.h
@@ -593,6 +593,13 @@ affinity_balance_cpumask(const struct sched_unit *unit, int step,
         cpumask_copy(mask, unit->cpu_hard_affinity);
 }
 
+struct affinity_masks {
+    cpumask_var_t hard;
+    cpumask_var_t soft;
+};
+
+bool alloc_affinity_masks(struct affinity_masks *affinity);
+void free_affinity_masks(struct affinity_masks *affinity);
 void sched_rm_cpu(unsigned int cpu);
 const cpumask_t *sched_get_opt_cpumask(enum sched_gran opt, unsigned int cpu);
 void schedule_dump(struct cpupool *c);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 701963f84c..4e25627d96 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -649,8 +649,15 @@ static inline void get_knownalive_domain(struct domain *d)
     ASSERT(!(atomic_read(&d->refcnt) & DOMAIN_DESTROYED));
 }
 
+struct affinity_masks;
+
 int domain_set_node_affinity(struct domain *d, const nodemask_t *affinity);
-void domain_update_node_affinity(struct domain *d);
+void domain_update_node_aff(struct domain *d, struct affinity_masks *affinity);
+
+static inline void domain_update_node_affinity(struct domain *d)
+{
+    domain_update_node_aff(d, NULL);
+}
 
 /*
  * To be implemented by each architecture, sanity checking the configuration
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.15
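[Editor's note: the patch above replaces a per-domain affinity update loop with cpupool_update_node_affinity(), which allocates a pair of scratch masks once and reuses them for every domain in the pool. The following is a minimal standalone sketch of that allocate-once/reuse pattern, using stand-in types and plain malloc rather than Xen's cpumask API.]

```c
/* Sketch of the scratch-mask reuse pattern: one allocation for the
 * whole loop instead of one per domain. All types are stand-ins. */
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

#define MASK_WORDS 4

struct affinity_masks {
    unsigned long *hard;
    unsigned long *soft;
};

static bool alloc_affinity_masks(struct affinity_masks *m)
{
    m->hard = calloc(MASK_WORDS, sizeof(unsigned long));
    m->soft = calloc(MASK_WORDS, sizeof(unsigned long));
    if ( !m->hard || !m->soft )
    {
        free(m->hard);
        free(m->soft);
        m->hard = m->soft = NULL;
        return false;
    }
    return true;
}

static void free_affinity_masks(struct affinity_masks *m)
{
    free(m->hard);
    free(m->soft);
}

/* Stand-in for cpupool_update_node_affinity(): allocate the scratch
 * masks once, reuse them for each domain, free them once at the end. */
static int update_all_domains(int ndomains)
{
    struct affinity_masks masks;
    int updated = 0;

    if ( !alloc_affinity_masks(&masks) )
        return -1;

    for ( int d = 0; d < ndomains; d++ )
    {
        /* A real domain_update_node_aff(d, &masks) would recompute
         * node affinity here; we just clear and reuse the scratch. */
        memset(masks.hard, 0, MASK_WORDS * sizeof(unsigned long));
        memset(masks.soft, 0, MASK_WORDS * sizeof(unsigned long));
        updated++;
    }

    free_affinity_masks(&masks);
    return updated;
}
```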


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:25:08 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:25:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420269.664995 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFFs-0002Oy-1V; Tue, 11 Oct 2022 13:25:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420269.664995; Tue, 11 Oct 2022 13:25:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFFr-0002Or-UE; Tue, 11 Oct 2022 13:25:07 +0000
Received: by outflank-mailman (input) for mailman id 420269;
 Tue, 11 Oct 2022 13:25:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFFq-0002Oa-LA
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:25:06 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFFq-0002bT-KT
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:25:06 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFFq-000284-Jj
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:25:06 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=KQB1ym6ceDyN4gvVe65H+Tr1PSdUbXvipkzlDCP2nmE=; b=4AEC4bFOFb3EKoBnMQMBPi7H5j
	J0FboK3HLgoT5YRF0aooKnoct0KB9HSWRc9ne2jThMDlBf/N9tzLO1PYTps6DUczuPB1RnNFRj8vj
	bMp12rc6wZQElHCDJFK46Pfi/wF8Mp4NUvgNiH57tAXLFuX+Xji9FU11/izjfcXGqb9U=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.15] xen/sched: carve out memory allocation and freeing from schedule_cpu_rm()
Message-Id: <E1oiFFq-000284-Jj@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:25:06 +0000

commit d638c2085f71f694344b34e70eb1b371c86b00f0
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Tue Oct 11 15:15:14 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:15:14 2022 +0200

    xen/sched: carve out memory allocation and freeing from schedule_cpu_rm()
    
    In order to prepare not allocating or freeing memory from
    schedule_cpu_rm(), move this functionality to dedicated functions.
    
    For now call those functions from schedule_cpu_rm().
    
    No change of behavior expected.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: d42be6f83480b3ada286dc18444331a816be88a3
    master date: 2022-09-05 11:42:30 +0100
---
 xen/common/sched/core.c    | 143 +++++++++++++++++++++++++++------------------
 xen/common/sched/private.h |  11 ++++
 2 files changed, 98 insertions(+), 56 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 065a83eca9..2decb1161a 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -3221,6 +3221,75 @@ out:
     return ret;
 }
 
+/*
+ * Allocate all memory needed for schedule_cpu_rm(), as allocations cannot
+ * be made in stop_machine() context.
+ *
+ * Between alloc_cpu_rm_data() and the real cpu removal action the relevant
+ * contents of struct sched_resource can't change, as the cpu in question is
+ * locked against any other movement to or from cpupools, and the data copied
+ * by alloc_cpu_rm_data() is modified only in case the cpu in question is
+ * being moved from or to a cpupool.
+ */
+struct cpu_rm_data *alloc_cpu_rm_data(unsigned int cpu)
+{
+    struct cpu_rm_data *data;
+    const struct sched_resource *sr;
+    unsigned int idx;
+
+    rcu_read_lock(&sched_res_rculock);
+
+    sr = get_sched_res(cpu);
+    data = xmalloc_flex_struct(struct cpu_rm_data, sr, sr->granularity - 1);
+    if ( !data )
+        goto out;
+
+    data->old_ops = sr->scheduler;
+    data->vpriv_old = idle_vcpu[cpu]->sched_unit->priv;
+    data->ppriv_old = sr->sched_priv;
+
+    for ( idx = 0; idx < sr->granularity - 1; idx++ )
+    {
+        data->sr[idx] = sched_alloc_res();
+        if ( data->sr[idx] )
+        {
+            data->sr[idx]->sched_unit_idle = sched_alloc_unit_mem();
+            if ( !data->sr[idx]->sched_unit_idle )
+            {
+                sched_res_free(&data->sr[idx]->rcu);
+                data->sr[idx] = NULL;
+            }
+        }
+        if ( !data->sr[idx] )
+        {
+            while ( idx > 0 )
+                sched_res_free(&data->sr[--idx]->rcu);
+            XFREE(data);
+            goto out;
+        }
+
+        data->sr[idx]->curr = data->sr[idx]->sched_unit_idle;
+        data->sr[idx]->scheduler = &sched_idle_ops;
+        data->sr[idx]->granularity = 1;
+
+        /* We want the lock not to change when replacing the resource. */
+        data->sr[idx]->schedule_lock = sr->schedule_lock;
+    }
+
+ out:
+    rcu_read_unlock(&sched_res_rculock);
+
+    return data;
+}
+
+void free_cpu_rm_data(struct cpu_rm_data *mem, unsigned int cpu)
+{
+    sched_free_udata(mem->old_ops, mem->vpriv_old);
+    sched_free_pdata(mem->old_ops, mem->ppriv_old, cpu);
+
+    xfree(mem);
+}
+
 /*
  * Remove a pCPU from its cpupool. Its scheduler becomes &sched_idle_ops
  * (the idle scheduler).
@@ -3229,53 +3298,23 @@ out:
  */
 int schedule_cpu_rm(unsigned int cpu)
 {
-    void *ppriv_old, *vpriv_old;
-    struct sched_resource *sr, **sr_new = NULL;
+    struct sched_resource *sr;
+    struct cpu_rm_data *data;
     struct sched_unit *unit;
-    struct scheduler *old_ops;
     spinlock_t *old_lock;
     unsigned long flags;
-    int idx, ret = -ENOMEM;
+    int idx = 0;
     unsigned int cpu_iter;
 
+    data = alloc_cpu_rm_data(cpu);
+    if ( !data )
+        return -ENOMEM;
+
     rcu_read_lock(&sched_res_rculock);
 
     sr = get_sched_res(cpu);
-    old_ops = sr->scheduler;
-
-    if ( sr->granularity > 1 )
-    {
-        sr_new = xmalloc_array(struct sched_resource *, sr->granularity - 1);
-        if ( !sr_new )
-            goto out;
-        for ( idx = 0; idx < sr->granularity - 1; idx++ )
-        {
-            sr_new[idx] = sched_alloc_res();
-            if ( sr_new[idx] )
-            {
-                sr_new[idx]->sched_unit_idle = sched_alloc_unit_mem();
-                if ( !sr_new[idx]->sched_unit_idle )
-                {
-                    sched_res_free(&sr_new[idx]->rcu);
-                    sr_new[idx] = NULL;
-                }
-            }
-            if ( !sr_new[idx] )
-            {
-                for ( idx--; idx >= 0; idx-- )
-                    sched_res_free(&sr_new[idx]->rcu);
-                goto out;
-            }
-            sr_new[idx]->curr = sr_new[idx]->sched_unit_idle;
-            sr_new[idx]->scheduler = &sched_idle_ops;
-            sr_new[idx]->granularity = 1;
 
-            /* We want the lock not to change when replacing the resource. */
-            sr_new[idx]->schedule_lock = sr->schedule_lock;
-        }
-    }
-
-    ret = 0;
+    ASSERT(sr->granularity);
     ASSERT(sr->cpupool != NULL);
     ASSERT(cpumask_test_cpu(cpu, &cpupool_free_cpus));
     ASSERT(!cpumask_test_cpu(cpu, sr->cpupool->cpu_valid));
@@ -3283,10 +3322,6 @@ int schedule_cpu_rm(unsigned int cpu)
     /* See comment in schedule_cpu_add() regarding lock switching. */
     old_lock = pcpu_schedule_lock_irqsave(cpu, &flags);
 
-    vpriv_old = idle_vcpu[cpu]->sched_unit->priv;
-    ppriv_old = sr->sched_priv;
-
-    idx = 0;
     for_each_cpu ( cpu_iter, sr->cpus )
     {
         per_cpu(sched_res_idx, cpu_iter) = 0;
@@ -3300,27 +3335,27 @@ int schedule_cpu_rm(unsigned int cpu)
         else
         {
             /* Initialize unit. */
-            unit = sr_new[idx]->sched_unit_idle;
-            unit->res = sr_new[idx];
+            unit = data->sr[idx]->sched_unit_idle;
+            unit->res = data->sr[idx];
             unit->is_running = true;
             sched_unit_add_vcpu(unit, idle_vcpu[cpu_iter]);
             sched_domain_insert_unit(unit, idle_vcpu[cpu_iter]->domain);
 
             /* Adjust cpu masks of resources (old and new). */
             cpumask_clear_cpu(cpu_iter, sr->cpus);
-            cpumask_set_cpu(cpu_iter, sr_new[idx]->cpus);
+            cpumask_set_cpu(cpu_iter, data->sr[idx]->cpus);
             cpumask_set_cpu(cpu_iter, &sched_res_mask);
 
             /* Init timer. */
-            init_timer(&sr_new[idx]->s_timer, s_timer_fn, NULL, cpu_iter);
+            init_timer(&data->sr[idx]->s_timer, s_timer_fn, NULL, cpu_iter);
 
             /* Last resource initializations and insert resource pointer. */
-            sr_new[idx]->master_cpu = cpu_iter;
-            set_sched_res(cpu_iter, sr_new[idx]);
+            data->sr[idx]->master_cpu = cpu_iter;
+            set_sched_res(cpu_iter, data->sr[idx]);
 
             /* Last action: set the new lock pointer. */
             smp_mb();
-            sr_new[idx]->schedule_lock = &sched_free_cpu_lock;
+            data->sr[idx]->schedule_lock = &sched_free_cpu_lock;
 
             idx++;
         }
@@ -3336,16 +3371,12 @@ int schedule_cpu_rm(unsigned int cpu)
     /* _Not_ pcpu_schedule_unlock(): schedule_lock may have changed! */
     spin_unlock_irqrestore(old_lock, flags);
 
-    sched_deinit_pdata(old_ops, ppriv_old, cpu);
-
-    sched_free_udata(old_ops, vpriv_old);
-    sched_free_pdata(old_ops, ppriv_old, cpu);
+    sched_deinit_pdata(data->old_ops, data->ppriv_old, cpu);
 
-out:
     rcu_read_unlock(&sched_res_rculock);
-    xfree(sr_new);
+    free_cpu_rm_data(data, cpu);
 
-    return ret;
+    return 0;
 }
 
 struct scheduler *scheduler_get_default(void)
diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h
index 6e036f8c80..ff31854252 100644
--- a/xen/common/sched/private.h
+++ b/xen/common/sched/private.h
@@ -600,6 +600,15 @@ struct affinity_masks {
 
 bool alloc_affinity_masks(struct affinity_masks *affinity);
 void free_affinity_masks(struct affinity_masks *affinity);
+
+/* Memory allocation related data for schedule_cpu_rm(). */
+struct cpu_rm_data {
+    const struct scheduler *old_ops;
+    void *ppriv_old;
+    void *vpriv_old;
+    struct sched_resource *sr[];
+};
+
 void sched_rm_cpu(unsigned int cpu);
 const cpumask_t *sched_get_opt_cpumask(enum sched_gran opt, unsigned int cpu);
 void schedule_dump(struct cpupool *c);
@@ -608,6 +617,8 @@ struct scheduler *scheduler_alloc(unsigned int sched_id);
 void scheduler_free(struct scheduler *sched);
 int cpu_disable_scheduler(unsigned int cpu);
 int schedule_cpu_add(unsigned int cpu, struct cpupool *c);
+struct cpu_rm_data *alloc_cpu_rm_data(unsigned int cpu);
+void free_cpu_rm_data(struct cpu_rm_data *mem, unsigned int cpu);
 int schedule_cpu_rm(unsigned int cpu);
 int sched_move_domain(struct domain *d, struct cpupool *c);
 struct cpupool *cpupool_get_by_id(unsigned int poolid);
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.15
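[Editor's note: the key shape in alloc_cpu_rm_data() above is a flexible-array-member allocation whose per-element setup can fail part way through, with the error path unwinding every element allocated so far in reverse order. A minimal standalone sketch of that pattern, with stand-in types instead of Xen's sched_resource API:]

```c
/* Sketch of flexible-array allocation with rollback on partial
 * failure. fail_at is a test hook simulating an out-of-memory
 * condition at a given index; all types are stand-ins. */
#include <stdlib.h>

struct resource { int id; };

struct cpu_rm_data {
    unsigned int count;
    struct resource *sr[];   /* flexible array member */
};

static struct cpu_rm_data *alloc_rm_data(unsigned int n, unsigned int fail_at)
{
    struct cpu_rm_data *data =
        malloc(sizeof(*data) + n * sizeof(data->sr[0]));

    if ( !data )
        return NULL;
    data->count = n;

    for ( unsigned int idx = 0; idx < n; idx++ )
    {
        data->sr[idx] = (idx == fail_at) ? NULL
                                         : malloc(sizeof(struct resource));
        if ( !data->sr[idx] )
        {
            while ( idx > 0 )          /* unwind in reverse order */
                free(data->sr[--idx]);
            free(data);
            return NULL;
        }
        data->sr[idx]->id = (int)idx;
    }

    return data;
}

static void free_rm_data(struct cpu_rm_data *data)
{
    for ( unsigned int idx = 0; idx < data->count; idx++ )
        free(data->sr[idx]);
    free(data);
}
```

The caller either gets a fully populated structure or NULL; there is no half-initialized state to clean up, which is what lets the real schedule_cpu_rm() treat the data as all-or-nothing.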


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:25:17 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:25:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420270.664999 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFG1-0002ST-4t; Tue, 11 Oct 2022 13:25:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420270.664999; Tue, 11 Oct 2022 13:25:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFG1-0002SJ-1f; Tue, 11 Oct 2022 13:25:17 +0000
Received: by outflank-mailman (input) for mailman id 420270;
 Tue, 11 Oct 2022 13:25:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFG0-0002SA-O6
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:25:16 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFG0-0002bo-NR
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:25:16 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFG0-00028k-Mk
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:25:16 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=uU+OiGpnnMEf2sLrqMhsNjQ9RBitebzgDDuOMw/XWPA=; b=yYkJPuJY51iTcGAMmNNVODq31T
	3+AR3w0WzAwQGKgKRlaEOeDjLW4k0jihhHYqckjv0khbuRblN5lrIT1mSBha2Cwoi6AN0d+WmGu9w
	6jpx8+1kv3rfY+CU5hnsJtbANwJbN3JIFsbR/86VdnZzfiawSfEawBcBGBuiYQQOiJfk=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.15] xen/sched: fix cpu hotplug
Message-Id: <E1oiFG0-00028k-Mk@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:25:16 +0000

commit d17680808b4c8015e31070c971e1ee548170ae34
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Tue Oct 11 15:15:41 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:15:41 2022 +0200

    xen/sched: fix cpu hotplug
    
    Cpu unplugging is calling schedule_cpu_rm() via stop_machine_run() with
    interrupts disabled, thus any memory allocation or freeing must be
    avoided.
    
    Since commit 5047cd1d5dea ("xen/common: Use enhanced
    ASSERT_ALLOC_CONTEXT in xmalloc()") this restriction is being enforced
    via an assertion, which will now fail.
    
    Fix this by allocating needed memory before entering stop_machine_run()
    and freeing any memory only after having finished stop_machine_run().
    
    Fixes: 1ec410112cdd ("xen/sched: support differing granularity in schedule_cpu_[add/rm]()")
    Reported-by: Gao Ruifeng <ruifeng.gao@intel.com>
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Tested-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: d84473689611eed32fd90b27e614f28af767fa3f
    master date: 2022-09-05 11:42:30 +0100
---
 xen/common/sched/core.c    | 25 +++++++++++++----
 xen/common/sched/cpupool.c | 69 ++++++++++++++++++++++++++++++++++++----------
 xen/common/sched/private.h |  5 ++--
 3 files changed, 77 insertions(+), 22 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 2decb1161a..900aab8f66 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -3231,7 +3231,7 @@ out:
  * by alloc_cpu_rm_data() is modified only in case the cpu in question is
  * being moved from or to a cpupool.
  */
-struct cpu_rm_data *alloc_cpu_rm_data(unsigned int cpu)
+struct cpu_rm_data *alloc_cpu_rm_data(unsigned int cpu, bool aff_alloc)
 {
     struct cpu_rm_data *data;
     const struct sched_resource *sr;
@@ -3244,6 +3244,17 @@ struct cpu_rm_data *alloc_cpu_rm_data(unsigned int cpu)
     if ( !data )
         goto out;
 
+    if ( aff_alloc )
+    {
+        if ( !alloc_affinity_masks(&data->affinity) )
+        {
+            XFREE(data);
+            goto out;
+        }
+    }
+    else
+        memset(&data->affinity, 0, sizeof(data->affinity));
+
     data->old_ops = sr->scheduler;
     data->vpriv_old = idle_vcpu[cpu]->sched_unit->priv;
     data->ppriv_old = sr->sched_priv;
@@ -3264,6 +3275,7 @@ struct cpu_rm_data *alloc_cpu_rm_data(unsigned int cpu)
         {
             while ( idx > 0 )
                 sched_res_free(&data->sr[--idx]->rcu);
+            free_affinity_masks(&data->affinity);
             XFREE(data);
             goto out;
         }
@@ -3286,6 +3298,7 @@ void free_cpu_rm_data(struct cpu_rm_data *mem, unsigned int cpu)
 {
     sched_free_udata(mem->old_ops, mem->vpriv_old);
     sched_free_pdata(mem->old_ops, mem->ppriv_old, cpu);
+    free_affinity_masks(&mem->affinity);
 
     xfree(mem);
 }
@@ -3296,17 +3309,18 @@ void free_cpu_rm_data(struct cpu_rm_data *mem, unsigned int cpu)
  * The cpu is already marked as "free" and not valid any longer for its
  * cpupool.
  */
-int schedule_cpu_rm(unsigned int cpu)
+int schedule_cpu_rm(unsigned int cpu, struct cpu_rm_data *data)
 {
     struct sched_resource *sr;
-    struct cpu_rm_data *data;
     struct sched_unit *unit;
     spinlock_t *old_lock;
     unsigned long flags;
     int idx = 0;
     unsigned int cpu_iter;
+    bool free_data = !data;
 
-    data = alloc_cpu_rm_data(cpu);
+    if ( !data )
+        data = alloc_cpu_rm_data(cpu, false);
     if ( !data )
         return -ENOMEM;
 
@@ -3374,7 +3388,8 @@ int schedule_cpu_rm(unsigned int cpu)
     sched_deinit_pdata(data->old_ops, data->ppriv_old, cpu);
 
     rcu_read_unlock(&sched_res_rculock);
-    free_cpu_rm_data(data, cpu);
+    if ( free_data )
+        free_cpu_rm_data(data, cpu);
 
     return 0;
 }
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 45b6ff9956..b5a948639a 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -402,22 +402,28 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c)
 }
 
 /* Update affinities of all domains in a cpupool. */
-static void cpupool_update_node_affinity(const struct cpupool *c)
+static void cpupool_update_node_affinity(const struct cpupool *c,
+                                         struct affinity_masks *masks)
 {
-    struct affinity_masks masks;
+    struct affinity_masks local_masks;
     struct domain *d;
 
-    if ( !alloc_affinity_masks(&masks) )
-        return;
+    if ( !masks )
+    {
+        if ( !alloc_affinity_masks(&local_masks) )
+            return;
+        masks = &local_masks;
+    }
 
     rcu_read_lock(&domlist_read_lock);
 
     for_each_domain_in_cpupool(d, c)
-        domain_update_node_aff(d, &masks);
+        domain_update_node_aff(d, masks);
 
     rcu_read_unlock(&domlist_read_lock);
 
-    free_affinity_masks(&masks);
+    if ( masks == &local_masks )
+        free_affinity_masks(masks);
 }
 
 /*
@@ -451,15 +457,17 @@ static int cpupool_assign_cpu_locked(struct cpupool *c, unsigned int cpu)
 
     rcu_read_unlock(&sched_res_rculock);
 
-    cpupool_update_node_affinity(c);
+    cpupool_update_node_affinity(c, NULL);
 
     return 0;
 }
 
-static int cpupool_unassign_cpu_finish(struct cpupool *c)
+static int cpupool_unassign_cpu_finish(struct cpupool *c,
+                                       struct cpu_rm_data *mem)
 {
     int cpu = cpupool_moving_cpu;
     const cpumask_t *cpus;
+    struct affinity_masks *masks = mem ? &mem->affinity : NULL;
     int ret;
 
     if ( c != cpupool_cpu_moving )
@@ -482,7 +490,7 @@ static int cpupool_unassign_cpu_finish(struct cpupool *c)
      */
     if ( !ret )
     {
-        ret = schedule_cpu_rm(cpu);
+        ret = schedule_cpu_rm(cpu, mem);
         if ( ret )
             cpumask_andnot(&cpupool_free_cpus, &cpupool_free_cpus, cpus);
         else
@@ -494,7 +502,7 @@ static int cpupool_unassign_cpu_finish(struct cpupool *c)
     }
     rcu_read_unlock(&sched_res_rculock);
 
-    cpupool_update_node_affinity(c);
+    cpupool_update_node_affinity(c, masks);
 
     return ret;
 }
@@ -558,7 +566,7 @@ static long cpupool_unassign_cpu_helper(void *info)
                       cpupool_cpu_moving->cpupool_id, cpupool_moving_cpu);
     spin_lock(&cpupool_lock);
 
-    ret = cpupool_unassign_cpu_finish(c);
+    ret = cpupool_unassign_cpu_finish(c, NULL);
 
     spin_unlock(&cpupool_lock);
     debugtrace_printk("cpupool_unassign_cpu ret=%ld\n", ret);
@@ -701,7 +709,7 @@ static int cpupool_cpu_add(unsigned int cpu)
  * This function is called in stop_machine context, so we can be sure no
  * non-idle vcpu is active on the system.
  */
-static void cpupool_cpu_remove(unsigned int cpu)
+static void cpupool_cpu_remove(unsigned int cpu, struct cpu_rm_data *mem)
 {
     int ret;
 
@@ -709,7 +717,7 @@ static void cpupool_cpu_remove(unsigned int cpu)
 
     if ( !cpumask_test_cpu(cpu, &cpupool_free_cpus) )
     {
-        ret = cpupool_unassign_cpu_finish(cpupool0);
+        ret = cpupool_unassign_cpu_finish(cpupool0, mem);
         BUG_ON(ret);
     }
     cpumask_clear_cpu(cpu, &cpupool_free_cpus);
@@ -775,7 +783,7 @@ static void cpupool_cpu_remove_forced(unsigned int cpu)
         {
             ret = cpupool_unassign_cpu_start(c, master_cpu);
             BUG_ON(ret);
-            ret = cpupool_unassign_cpu_finish(c);
+            ret = cpupool_unassign_cpu_finish(c, NULL);
             BUG_ON(ret);
         }
     }
@@ -993,12 +1001,24 @@ void dump_runq(unsigned char key)
 static int cpu_callback(
     struct notifier_block *nfb, unsigned long action, void *hcpu)
 {
+    static struct cpu_rm_data *mem;
+
     unsigned int cpu = (unsigned long)hcpu;
     int rc = 0;
 
     switch ( action )
     {
     case CPU_DOWN_FAILED:
+        if ( system_state <= SYS_STATE_active )
+        {
+            if ( mem )
+            {
+                free_cpu_rm_data(mem, cpu);
+                mem = NULL;
+            }
+            rc = cpupool_cpu_add(cpu);
+        }
+        break;
     case CPU_ONLINE:
         if ( system_state <= SYS_STATE_active )
             rc = cpupool_cpu_add(cpu);
@@ -1006,12 +1026,31 @@ static int cpu_callback(
     case CPU_DOWN_PREPARE:
         /* Suspend/Resume don't change assignments of cpus to cpupools. */
         if ( system_state <= SYS_STATE_active )
+        {
             rc = cpupool_cpu_remove_prologue(cpu);
+            if ( !rc )
+            {
+                ASSERT(!mem);
+                mem = alloc_cpu_rm_data(cpu, true);
+                rc = mem ? 0 : -ENOMEM;
+            }
+        }
         break;
     case CPU_DYING:
         /* Suspend/Resume don't change assignments of cpus to cpupools. */
         if ( system_state <= SYS_STATE_active )
-            cpupool_cpu_remove(cpu);
+        {
+            ASSERT(mem);
+            cpupool_cpu_remove(cpu, mem);
+        }
+        break;
+    case CPU_DEAD:
+        if ( system_state <= SYS_STATE_active )
+        {
+            ASSERT(mem);
+            free_cpu_rm_data(mem, cpu);
+            mem = NULL;
+        }
         break;
     case CPU_RESUME_FAILED:
         cpupool_cpu_remove_forced(cpu);
diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h
index ff31854252..3bab78ccb2 100644
--- a/xen/common/sched/private.h
+++ b/xen/common/sched/private.h
@@ -603,6 +603,7 @@ void free_affinity_masks(struct affinity_masks *affinity);
 
 /* Memory allocation related data for schedule_cpu_rm(). */
 struct cpu_rm_data {
+    struct affinity_masks affinity;
     const struct scheduler *old_ops;
     void *ppriv_old;
     void *vpriv_old;
@@ -617,9 +618,9 @@ struct scheduler *scheduler_alloc(unsigned int sched_id);
 void scheduler_free(struct scheduler *sched);
 int cpu_disable_scheduler(unsigned int cpu);
 int schedule_cpu_add(unsigned int cpu, struct cpupool *c);
-struct cpu_rm_data *alloc_cpu_rm_data(unsigned int cpu);
+struct cpu_rm_data *alloc_cpu_rm_data(unsigned int cpu, bool aff_alloc);
 void free_cpu_rm_data(struct cpu_rm_data *mem, unsigned int cpu);
-int schedule_cpu_rm(unsigned int cpu);
+int schedule_cpu_rm(unsigned int cpu, struct cpu_rm_data *mem);
 int sched_move_domain(struct domain *d, struct cpupool *c);
 struct cpupool *cpupool_get_by_id(unsigned int poolid);
 void cpupool_put(struct cpupool *pool);
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.15
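[Editor's note: the fix above works because the CPU hotplug notifier gives the scheduler a non-atomic callback (CPU_DOWN_PREPARE) before the stop_machine phase (CPU_DYING) and another after it (CPU_DEAD / CPU_DOWN_FAILED). A minimal standalone sketch of that allocate-early/free-late lifecycle, with stand-in events and types rather than Xen's notifier API:]

```c
/* Sketch: memory needed while "atomic" (interrupts off, as in
 * stop_machine context) is allocated in DOWN_PREPARE and freed in
 * DEAD or DOWN_FAILED. Events and types are illustrative stand-ins. */
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

enum cpu_event { CPU_DOWN_PREPARE, CPU_DYING, CPU_DEAD, CPU_DOWN_FAILED };

struct cpu_rm_data { int cpu; };

static struct cpu_rm_data *mem;   /* static across calls, as in cpu_callback() */

static int cpu_callback(enum cpu_event action, int cpu)
{
    switch ( action )
    {
    case CPU_DOWN_PREPARE:            /* normal context: may allocate */
        assert(!mem);
        mem = malloc(sizeof(*mem));
        if ( !mem )
            return -1;                /* veto the unplug early */
        mem->cpu = cpu;
        return 0;

    case CPU_DYING:                   /* atomic: only use, never allocate */
        assert(mem && mem->cpu == cpu);
        /* schedule_cpu_rm(cpu, mem) would run here */
        return 0;

    case CPU_DEAD:                    /* normal context again: free */
    case CPU_DOWN_FAILED:
        if ( mem )
        {
            free(mem);
            mem = NULL;
        }
        return 0;
    }
    return 0;
}
```

If allocation fails in CPU_DOWN_PREPARE, the unplug is refused before any irreversible step, which is exactly the ordering the commit message describes.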


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:25:27 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:25:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420271.665005 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFGB-0002Vg-6r; Tue, 11 Oct 2022 13:25:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420271.665005; Tue, 11 Oct 2022 13:25:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFGB-0002VX-3V; Tue, 11 Oct 2022 13:25:27 +0000
Received: by outflank-mailman (input) for mailman id 420271;
 Tue, 11 Oct 2022 13:25:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFGA-0002VP-R5
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:25:26 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFGA-0002bz-QN
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:25:26 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFGA-00029J-PO
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:25:26 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=AXCDcZslQd1Q12EE6vKQuk3RGmFFxw6E2/05D7mtjdI=; b=F5+Hl35FHqYL4Drmy+BOc4mYjS
	qoHP2FGUHEHhOwvXfHEg6KpnOmfjMMPTd8vRpUpv7/UVq6SLH8IR5kMXi3rJIzleih5VK5+NeEGXF
	fzhGCDaA/XXydDhOgVxCLdq20lb6H3hgYemB4RwA6svo1Qm2paiMwPbRrfDtMFwMXAmM=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.15] Config.mk: correct PIE-related option(s) in EMBEDDED_EXTRA_CFLAGS
Message-Id: <E1oiFGA-00029J-PO@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:25:26 +0000

commit 19cf28b515f21da02df80e68f901ad7650daaa37
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 15:15:55 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:15:55 2022 +0200

    Config.mk: correct PIE-related option(s) in EMBEDDED_EXTRA_CFLAGS
    
    I haven't been able to find evidence of "-nopie" ever having been a
    supported compiler option. The correct spelling is "-no-pie".
    Furthermore like "-pie" this is an option which is solely passed to the
    linker. The compiler only recognizes "-fpie" / "-fPIE" / "-fno-pie", and
    it doesn't infer these options from "-pie" / "-no-pie".
    
    Add the compiler recognized form, but for the possible case of the
    variable also being used somewhere for linking keep the linker option as
    well (with corrected spelling).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
    
    Build: Drop -no-pie from EMBEDDED_EXTRA_CFLAGS
    
    This breaks all Clang builds, as demonstrated by Gitlab CI.
    
    Contrary to the description in ecd6b9759919, -no-pie is not even an option
    passed to the linker.  GCC's actual behaviour is to inhibit the passing of
    -pie to the linker, as well as selecting different crt0 artefacts to be linked.
    
    EMBEDDED_EXTRA_CFLAGS is not used for $(CC)-doing-linking, and not liable to
    gain such a usecase.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Tested-by: Stefano Stabellini <sstabellini@kernel.org>
    Fixes: ecd6b9759919 ("Config.mk: correct PIE-related option(s) in EMBEDDED_EXTRA_CFLAGS")
    master commit: ecd6b9759919fa6335b0be1b5fc5cce29a30c4f1
    master date: 2022-09-08 09:25:26 +0200
    master commit: 13a7c0074ac8fb31f6c0485429b7a20a1946cb22
    master date: 2022-09-27 15:40:42 -0700
---
 Config.mk | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Config.mk b/Config.mk
index 96d89b2f7d..9f87608f66 100644
--- a/Config.mk
+++ b/Config.mk
@@ -203,7 +203,7 @@ endif
 APPEND_LDFLAGS += $(foreach i, $(APPEND_LIB), -L$(i))
 APPEND_CFLAGS += $(foreach i, $(APPEND_INCLUDES), -I$(i))
 
-EMBEDDED_EXTRA_CFLAGS := -nopie -fno-stack-protector -fno-stack-protector-all
+EMBEDDED_EXTRA_CFLAGS := -fno-pie -fno-stack-protector -fno-stack-protector-all
 EMBEDDED_EXTRA_CFLAGS += -fno-exceptions -fno-asynchronous-unwind-tables
 
 XEN_EXTFILES_URL ?= http://xenbits.xen.org/xen-extfiles
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.15


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:25:38 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:25:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420272.665007 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFGM-0002Yd-7f; Tue, 11 Oct 2022 13:25:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420272.665007; Tue, 11 Oct 2022 13:25:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFGM-0002YW-52; Tue, 11 Oct 2022 13:25:38 +0000
Received: by outflank-mailman (input) for mailman id 420272;
 Tue, 11 Oct 2022 13:25:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFGK-0002YO-U3
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:25:36 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFGK-0002c3-TO
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:25:36 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFGK-00029x-SQ
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:25:36 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=G2BTSD5kiSmAl++qGWhp21n8IoGWID4pxvxY6LkGiQc=; b=NXQuXOrod+XS6WqNb4LC0l7zEv
	VK8U5isVXUGAZWn03bvC5VxTupZmBm9utJKIwzQxfQ8i/AhCOcGNhUJCcBO7Kf/gh2/aUmu30abWf
	vcnLqvjnI4AGTcba2ScpUVFN3fYBH0iwSK7sx7tlYRum6tiJjEJEdXW0y+istV/MVJdg=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.15] tools/xenstore: minor fix of the migration stream doc
Message-Id: <E1oiFGK-00029x-SQ@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:25:36 +0000

commit 182f8bb503b9dd3db5dd9118dc763d241787c6fc
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Tue Oct 11 15:16:09 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:16:09 2022 +0200

    tools/xenstore: minor fix of the migration stream doc
    
    Drop mentioning the non-existent read-only socket in the migration
    stream description document.
    
    The related record field was removed in commit 8868a0e3f674 ("docs:
    update the xenstore migration stream documentation").
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
    master commit: ace1d2eff80d3d66c37ae765dae3e3cb5697e5a4
    master date: 2022-09-08 09:25:58 +0200
---
 docs/designs/xenstore-migration.md | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/docs/designs/xenstore-migration.md b/docs/designs/xenstore-migration.md
index 5f1155273e..78530bbb0e 100644
--- a/docs/designs/xenstore-migration.md
+++ b/docs/designs/xenstore-migration.md
@@ -129,11 +129,9 @@ xenstored state that needs to be restored.
 | `evtchn-fd`    | The file descriptor used to communicate with |
 |                | the event channel driver                     |
 
-xenstored will resume in the original process context. Hence `rw-socket-fd` and
-`ro-socket-fd` simply specify the file descriptors of the sockets. Sockets
-are not always used, however, and so -1 will be used to denote an unused
-socket.
-
+xenstored will resume in the original process context. Hence `rw-socket-fd`
+simply specifies the file descriptor of the socket. Sockets are not always
+used, however, and so -1 will be used to denote an unused socket.
 
 \pagebreak
 
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.15


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:25:48 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:25:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420273.665010 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFGW-0002bA-97; Tue, 11 Oct 2022 13:25:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420273.665010; Tue, 11 Oct 2022 13:25:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFGW-0002b2-6Y; Tue, 11 Oct 2022 13:25:48 +0000
Received: by outflank-mailman (input) for mailman id 420273;
 Tue, 11 Oct 2022 13:25:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFGV-0002av-13
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:25:47 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFGV-0002c9-0K
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:25:47 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFGU-0002AQ-Vf
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:25:46 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=UPaf0OwNQGsnnXelqa9/v0OlpQdv0LBhIb5NvJsM7sI=; b=NHWb0vZrJUgkDMElyhgRgmxqeh
	dOoQo4NZJ988keqv/+r4ERBCZtw5SmZfU7FbdKbtNxVs/jg0ZLXF0HwVw2eRk0jTWUfz8G+3LvOoz
	Soop8wUoueMZM9Ib3Mz0yGbsLqMBfTSAbCz6zWBlkc5CWZoWnVzvRZqOi803pmCGWs8Q=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.15] xen/gnttab: fix gnttab_acquire_resource()
Message-Id: <E1oiFGU-0002AQ-Vf@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:25:46 +0000

commit 3ac64b3751837a117ee3dfb3e2cc27057a83d0f7
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Tue Oct 11 15:16:53 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:16:53 2022 +0200

    xen/gnttab: fix gnttab_acquire_resource()
    
    Commit 9dc46386d89d ("gnttab: work around "may be used uninitialized"
    warning") was wrong, as vaddrs can legitimately be NULL in case
    XENMEM_resource_grant_table_id_status was specified for a grant table
    v1. This would result in crashes in debug builds due to
    ASSERT_UNREACHABLE() triggering.
    
    Check vaddrs for NULL only in the rc == 0 case.
    
    Expand the tests in tools/tests/resource to tickle this path, and verify that
    using XENMEM_resource_grant_table_id_status on a v1 grant table fails.
    
    Fixes: 9dc46386d89d ("gnttab: work around "may be used uninitialized" warning")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com> # xen
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 52daa6a8483e4fbd6757c9d1b791e23931791608
    master date: 2022-09-09 16:28:38 +0100
---
 tools/tests/resource/test-resource.c | 15 +++++++++++++++
 xen/common/grant_table.c             |  2 +-
 2 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/tools/tests/resource/test-resource.c b/tools/tests/resource/test-resource.c
index 1caaa60e62..bf485baff2 100644
--- a/tools/tests/resource/test-resource.c
+++ b/tools/tests/resource/test-resource.c
@@ -63,6 +63,21 @@ static void test_gnttab(uint32_t domid, unsigned int nr_frames)
     rc = xenforeignmemory_unmap_resource(fh, res);
     if ( rc )
         return fail("    Fail: Unmap %d - %s\n", errno, strerror(errno));
+
+    /*
+     * Verify that an attempt to map the status frames fails, as the domain is
+     * in gnttab v1 mode.
+     */
+    res = xenforeignmemory_map_resource(
+        fh, domid, XENMEM_resource_grant_table,
+        XENMEM_resource_grant_table_id_status, 0, 1,
+        (void **)&gnttab, PROT_READ | PROT_WRITE, 0);
+
+    if ( res )
+    {
+        fail("    Fail: Managed to map gnttab v2 status frames in v1 mode\n");
+        xenforeignmemory_unmap_resource(fh, res);
+    }
 }
 
 static void test_domain_configurations(void)
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 0523beb9b7..01e426c67f 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -4138,7 +4138,7 @@ int gnttab_acquire_resource(
      * on non-error paths, and hence it needs setting to NULL at the top of the
      * function.  Leave some runtime safety.
      */
-    if ( !vaddrs )
+    if ( !rc && !vaddrs )
     {
         ASSERT_UNREACHABLE();
         rc = -ENODATA;
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.15


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:25:58 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:25:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420274.665015 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFGg-0002dx-Am; Tue, 11 Oct 2022 13:25:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420274.665015; Tue, 11 Oct 2022 13:25:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFGg-0002dp-83; Tue, 11 Oct 2022 13:25:58 +0000
Received: by outflank-mailman (input) for mailman id 420274;
 Tue, 11 Oct 2022 13:25:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFGf-0002dh-3d
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:25:57 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFGf-0002cD-30
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:25:57 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFGf-0002B1-2N
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:25:57 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=rhK4o4W2QZEfV/eTyrKRlh7h/Sn3EBRnVezzy+z1mjE=; b=Q6X85SdOAqpwyRPa/zK0vQr9VD
	/DVM5uSi3Y31UkKYc7q735t7GoRNc8S1kV07iT/2OTToBd4cmesluS4pfdxr8uZ4qRWhGqcvPTebV
	8SXY4sRy2/lhhB8gMld+ztB0n9QuvIS6Ja1DwjCn5b9TdJuzj7DnhLdsjpdQSlB8W7Jo=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.15] x86: wire up VCPUOP_register_vcpu_time_memory_area for 32-bit guests
Message-Id: <E1oiFGf-0002B1-2N@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:25:57 +0000

commit 62e534d17cdd838828bfd75d3d845e31524dd336
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 15:17:12 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:17:12 2022 +0200

    x86: wire up VCPUOP_register_vcpu_time_memory_area for 32-bit guests
    
    Ever since its introduction, VCPUOP_register_vcpu_time_memory_area has
    been available only to native domains. Linux, for example, would
    attempt to use it irrespective of guest bitness (including in its
    so-called PVHVM mode) as long as it finds XEN_PVCLOCK_TSC_STABLE_BIT
    set (which we set only for clocksource=tsc, which in turn needs
    enabling via a command line option).
    
    Fixes: a5d39947cb89 ("Allow guests to register secondary vcpu_time_info")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    master commit: b726541d94bd0a80b5864d17a2cd2e6d73a3fe0a
    master date: 2022-09-29 14:47:45 +0200
---
 xen/arch/x86/x86_64/domain.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/xen/arch/x86/x86_64/domain.c b/xen/arch/x86/x86_64/domain.c
index c46dccc25a..d51d993447 100644
--- a/xen/arch/x86/x86_64/domain.c
+++ b/xen/arch/x86/x86_64/domain.c
@@ -54,6 +54,26 @@ arch_compat_vcpu_op(
         break;
     }
 
+    case VCPUOP_register_vcpu_time_memory_area:
+    {
+        struct compat_vcpu_register_time_memory_area area = { .addr.p = 0 };
+
+        rc = -EFAULT;
+        if ( copy_from_guest(&area.addr.h, arg, 1) )
+            break;
+
+        if ( area.addr.h.c != area.addr.p ||
+             !compat_handle_okay(area.addr.h, 1) )
+            break;
+
+        rc = 0;
+        guest_from_compat_handle(v->arch.time_info_guest, area.addr.h);
+
+        force_update_vcpu_system_time(v);
+
+        break;
+    }
+
     case VCPUOP_get_physid:
         rc = arch_do_vcpu_op(cmd, v, arg);
         break;
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.15


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:26:08 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:26:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420275.665019 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFGq-0002gf-D3; Tue, 11 Oct 2022 13:26:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420275.665019; Tue, 11 Oct 2022 13:26:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFGq-0002gX-9e; Tue, 11 Oct 2022 13:26:08 +0000
Received: by outflank-mailman (input) for mailman id 420275;
 Tue, 11 Oct 2022 13:26:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFGp-0002gQ-6X
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:26:07 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFGp-0002cb-5t
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:26:07 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFGp-0002Bh-58
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:26:07 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=eQD+NFfjKA4UdRGovLuOPSyR2caznds5JgqHzR8ZEw0=; b=GIaj2BaKwBN4KcvOXHEPjsmEE6
	Ro0S3GVPFUCJ9mQi37DRjfoF9eZddVlpbZ6kiNvNty0HdXpFiQXKSc5Nmu5L7QV0q9SB22371BNVs
	UFSHIo/vxyfXiwYW5srXav4q8Axk+eCepuJvOKtN11L9GUyKqPf9nMuSWwK2jrw/HM4E=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.15] x86/vpmu: Fix race-condition in vpmu_load
Message-Id: <E1oiFGp-0002Bh-58@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:26:07 +0000

commit 9690bb261d5fa09cb281e1fa124d93db7b84fda5
Author:     Tamas K Lengyel <tamas.lengyel@intel.com>
AuthorDate: Tue Oct 11 15:17:42 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:17:42 2022 +0200

    x86/vpmu: Fix race-condition in vpmu_load
    
    The vPMU code-base attempts to optimize saving/reloading of the PMU
    context by keeping track of which vCPU ran on each pCPU. When a pCPU
    is being scheduled, it checks whether the previous vCPU is the current
    one; if not, it calls vpmu_save_force. Unfortunately, if the previous
    vCPU is already being scheduled to run on another pCPU, its state will
    already be runnable, which results in an ASSERT failure.
    
    Fix this by always performing a pmu context save in vpmu_save when called from
    vpmu_switch_from, and do a vpmu_load when called from vpmu_switch_to.
    
    While this introduces a minimal overhead when the same vCPU is
    rescheduled on the same pCPU, the ASSERT failure is avoided and the
    code is a lot easier to reason about.
    
    Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    master commit: defa4e51d20a143bdd4395a075bf0933bb38a9a4
    master date: 2022-09-30 09:53:49 +0200
---
 xen/arch/x86/cpu/vpmu.c | 42 ++++--------------------------------------
 1 file changed, 4 insertions(+), 38 deletions(-)

diff --git a/xen/arch/x86/cpu/vpmu.c b/xen/arch/x86/cpu/vpmu.c
index fb1b296a6c..800eff87dc 100644
--- a/xen/arch/x86/cpu/vpmu.c
+++ b/xen/arch/x86/cpu/vpmu.c
@@ -364,58 +364,24 @@ void vpmu_save(struct vcpu *v)
     vpmu->last_pcpu = pcpu;
     per_cpu(last_vcpu, pcpu) = v;
 
+    vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+
     if ( vpmu->arch_vpmu_ops )
         if ( vpmu->arch_vpmu_ops->arch_vpmu_save(v, 0) )
             vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
 
+    vpmu_reset(vpmu, VPMU_CONTEXT_SAVE);
+
     apic_write(APIC_LVTPC, PMU_APIC_VECTOR | APIC_LVT_MASKED);
 }
 
 int vpmu_load(struct vcpu *v, bool_t from_guest)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    int pcpu = smp_processor_id();
-    struct vcpu *prev = NULL;
 
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
         return 0;
 
-    /* First time this VCPU is running here */
-    if ( vpmu->last_pcpu != pcpu )
-    {
-        /*
-         * Get the context from last pcpu that we ran on. Note that if another
-         * VCPU is running there it must have saved this VPCU's context before
-         * startig to run (see below).
-         * There should be no race since remote pcpu will disable interrupts
-         * before saving the context.
-         */
-        if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-        {
-            on_selected_cpus(cpumask_of(vpmu->last_pcpu),
-                             vpmu_save_force, (void *)v, 1);
-            vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
-        }
-    } 
-
-    /* Prevent forced context save from remote CPU */
-    local_irq_disable();
-
-    prev = per_cpu(last_vcpu, pcpu);
-
-    if ( prev != v && prev )
-    {
-        vpmu = vcpu_vpmu(prev);
-
-        /* Someone ran here before us */
-        vpmu_save_force(prev);
-        vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
-
-        vpmu = vcpu_vpmu(v);
-    }
-
-    local_irq_enable();
-
     /* Only when PMU is counting, we load PMU context immediately. */
     if ( !vpmu_is_set(vpmu, VPMU_RUNNING) ||
          (!has_vlapic(vpmu_vcpu(vpmu)->domain) &&
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.15


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:44:09 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:44:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420294.665056 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFYE-0006mn-Hg; Tue, 11 Oct 2022 13:44:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420294.665056; Tue, 11 Oct 2022 13:44:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFYE-0006mf-Eq; Tue, 11 Oct 2022 13:44:06 +0000
Received: by outflank-mailman (input) for mailman id 420294;
 Tue, 11 Oct 2022 13:44:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFYC-0006mZ-NK
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:44:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFYC-0002vy-IW
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:44:04 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFYC-0003Kk-HT
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:44:04 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=xamUPkQMeGXNkly1ad6+q4JwskTet39OfQtCoc3iync=; b=0XO/K4GFqqFOFYWwyJj18AGIv1
	MjEKstKkNMY7kp+BJg0Awqjulsky5TWoyGBxmxmy9o3KSyv4ib8HpHrUJ7F5AFbHgQgcOvF1wRarU
	27hH7XPpsXlOQwJPlTaM8K8eQnLGliG0fjD12pGZD+zThzVHumQPaQ/321Tn5v86bvaI=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.14] xen/arm: p2m: Prevent adding mapping when domain is dying
Message-Id: <E1oiFYC-0003Kk-HT@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:44:04 +0000

commit 7a7406ba1d8912719eb7c9eec2d7cd34f49dfac0
Author:     Julien Grall <jgrall@amazon.com>
AuthorDate: Tue Oct 11 15:32:58 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:32:58 2022 +0200

    xen/arm: p2m: Prevent adding mapping when domain is dying
    
    During the domain destroy process, the domain will still be accessible
    until it is fully destroyed. The same is true of the P2M, because we
    don't bail out early if is_dying is non-zero. If a domain has
    permission to modify another domain's P2M (i.e. dom0, or a
    stubdomain), then foreign mappings can be added past
    relinquish_p2m_mapping().
    
    Therefore, we need to prevent mappings from being added while the
    domain is dying. This commit does so by adding a d->is_dying check to
    p2m_set_entry(). It also enhances the check in
    relinquish_p2m_mapping() to make sure that no mappings can be added to
    the P2M after the P2M lock is released.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Tested-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    master commit: 3ebe773293e3b945460a3d6f54f3b91915397bab
    master date: 2022-10-11 14:20:18 +0200
---
 xen/arch/arm/p2m.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 2290b7114f..35943589fc 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1085,6 +1085,15 @@ int p2m_set_entry(struct p2m_domain *p2m,
 {
     int rc = 0;
 
+    /*
+     * Any reference taken by the P2M mappings (e.g. foreign mapping) will
+     * be dropped in relinquish_p2m_mapping(). As the P2M will still
+     * be accessible after, we need to prevent mapping to be added when the
+     * domain is dying.
+     */
+    if ( unlikely(p2m->domain->is_dying) )
+        return -ENOMEM;
+
     while ( nr )
     {
         unsigned long mask;
@@ -1579,6 +1588,8 @@ int relinquish_p2m_mapping(struct domain *d)
     unsigned int order;
     gfn_t start, end;
 
+    BUG_ON(!d->is_dying);
+    /* No mappings can be added in the P2M after the P2M lock is released. */
     p2m_write_lock(p2m);
 
     start = p2m->lowest_mapped_gfn;
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.14


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:44:15 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:44:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420295.665060 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFYN-0006oi-Jy; Tue, 11 Oct 2022 13:44:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420295.665060; Tue, 11 Oct 2022 13:44:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFYN-0006oa-GK; Tue, 11 Oct 2022 13:44:15 +0000
Received: by outflank-mailman (input) for mailman id 420295;
 Tue, 11 Oct 2022 13:44:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFYM-0006oP-MU
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:44:14 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFYM-0002wR-Lj
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:44:14 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFYM-0003LJ-Km
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:44:14 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=5uORATO/WKmZ0dp1cee6AQ+I4x6JNAaCESRoPtHr0b8=; b=fOCOUNmMeBkz4EPWxSuqUabguW
	TPpxHJwkDe4ci8AjBbLxFfC55veC6c1jOLOoz/lFf/PVKGy76mWH++VuMxJ/spZonbde045btF+rJ
	R8+fTjvAxHdgYOvU+kLpiUrXQP96BtBJdmjYokWXsYAk9zcaSDuvEPHFSd4E8BKLL14Q=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.14] xen/arm: p2m: Handle preemption when freeing intermediate page tables
Message-Id: <E1oiFYM-0003LJ-Km@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:44:14 +0000

commit 9c975e636ed2782d4fd8b2b76126bdfb81f386cc
Author:     Julien Grall <jgrall@amazon.com>
AuthorDate: Tue Oct 11 15:34:25 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:34:25 2022 +0200

    xen/arm: p2m: Handle preemption when freeing intermediate page tables
    
    At the moment the P2M page tables will be freed when the domain structure
    is freed without any preemption. As the P2M is quite large, iterating
    through it may take more time than is reasonable without intermediate
    preemption (to run softirqs and perhaps the scheduler).
    
    Split p2m_teardown() into two parts: one preemptible, called when
    relinquishing the resources, and the other non-preemptible, called
    when freeing the domain structure.
    
    As we are now freeing the P2M pages early, we also need to prevent
    further allocation if someone calls p2m_set_entry() past
    p2m_teardown() (I wasn't able to prove this will never happen). This
    is done by the domain->is_dying check added to p2m_set_entry() by the
    previous patch.
    
    Similarly, we want to make sure that no-one can access the freed
    pages. Therefore the root is cleared before freeing the pages.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Tested-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    master commit: 3202084566bba0ef0c45caf8c24302f83d92f9c8
    master date: 2022-10-11 14:20:56 +0200
---
 xen/arch/arm/domain.c     | 10 ++++++++--
 xen/arch/arm/p2m.c        | 47 ++++++++++++++++++++++++++++++++++++++++++++---
 xen/include/asm-arm/p2m.h | 13 +++++++++++--
 3 files changed, 63 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 849fef2f1e..caa625bd16 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -774,10 +774,10 @@ fail:
 void arch_domain_destroy(struct domain *d)
 {
     /* IOMMU page table is shared with P2M, always call
-     * iommu_domain_destroy() before p2m_teardown().
+     * iommu_domain_destroy() before p2m_final_teardown().
      */
     iommu_domain_destroy(d);
-    p2m_teardown(d);
+    p2m_final_teardown(d);
     domain_vgic_free(d);
     domain_vuart_free(d);
     free_xenheap_page(d->shared_info);
@@ -979,6 +979,7 @@ enum {
     PROG_xen,
     PROG_page,
     PROG_mapping,
+    PROG_p2m,
     PROG_done,
 };
 
@@ -1029,6 +1030,11 @@ int domain_relinquish_resources(struct domain *d)
         if ( ret )
             return ret;
 
+    PROGRESS(p2m):
+        ret = p2m_teardown(d);
+        if ( ret )
+            return ret;
+
     PROGRESS(done):
         break;
 
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 35943589fc..62f4d31dc1 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1496,17 +1496,58 @@ static void p2m_free_vmid(struct domain *d)
     spin_unlock(&vmid_alloc_lock);
 }
 
-void p2m_teardown(struct domain *d)
+int p2m_teardown(struct domain *d)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    unsigned long count = 0;
     struct page_info *pg;
+    unsigned int i;
+    int rc = 0;
+
+    p2m_write_lock(p2m);
+
+    /*
+     * We are about to free the intermediate page-tables, so clear the
+     * root to prevent any walk to use them.
+     */
+    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
+        clear_and_clean_page(p2m->root + i);
+
+    /*
+     * The domain will not be scheduled anymore, so in theory we should
+     * not need to flush the TLBs. Do it for safety purposes.
+     *
+     * Note that all the devices have already been de-assigned. So we don't
+     * need to flush the IOMMU TLB here.
+     */
+    p2m_force_tlb_flush_sync(p2m);
+
+    while ( (pg = page_list_remove_head(&p2m->pages)) )
+    {
+        free_domheap_page(pg);
+        count++;
+        /* Arbitrarily preempt every 512 iterations */
+        if ( !(count % 512) && hypercall_preempt_check() )
+        {
+            rc = -ERESTART;
+            break;
+        }
+    }
+
+    p2m_write_unlock(p2m);
+
+    return rc;
+}
+
+void p2m_final_teardown(struct domain *d)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
     /* p2m not actually initialized */
     if ( !p2m->domain )
         return;
 
-    while ( (pg = page_list_remove_head(&p2m->pages)) )
-        free_domheap_page(pg);
+    ASSERT(page_list_empty(&p2m->pages));
 
     if ( p2m->root )
         free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index ea8a03449d..f40f82794d 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -183,8 +183,17 @@ void setup_virt_paging(void);
 /* Init the datastructures for later use by the p2m code */
 int p2m_init(struct domain *d);
 
-/* Return all the p2m resources to Xen. */
-void p2m_teardown(struct domain *d);
+/*
+ * The P2M resources are freed in two parts:
+ *  - p2m_teardown() will be called when relinquishing the resources. It
+ *    will free large resources (e.g. intermediate page-tables) that
+ *    require preemption.
+ *  - p2m_final_teardown() will be called when the domain struct is
+ *    being freed. This *cannot* be preempted and therefore only small
+ *    resources should be freed here.
+ */
+int p2m_teardown(struct domain *d);
+void p2m_final_teardown(struct domain *d);
 
 /*
  * Remove mapping refcount on each mapping page in the p2m
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.14


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:44:25 2022
From: patchbot@xen.org
Subject: [xen staging-4.14] x86/p2m: add option to skip root pagetable removal in p2m_teardown()
Message-Id: <E1oiFYW-0003Ls-OA@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:44:24 +0000

commit 54b6eab0e4450a39ebe11b8f2faeaeb09c6e774a
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 15:34:41 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:34:41 2022 +0200

    x86/p2m: add option to skip root pagetable removal in p2m_teardown()
    
    Add a new parameter to p2m_teardown() in order to select whether the
    root page table should also be freed.  Note that all users are
    adjusted to pass the parameter to remove the root page tables, so
    behavior is not modified.
    
    No functional change intended.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Suggested-by: Julien Grall <julien@xen.org>
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: 1df52a270225527ae27bfa2fc40347bf93b78357
    master date: 2022-10-11 14:21:23 +0200
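    The remove_root == false case boils down to draining the page pool while
    keeping exactly one sentinel page (the root) on it, then putting that
    page back so a later full teardown still finds it. A minimal sketch of
    the idea, using a hypothetical singly linked list rather than Xen's
    page_list / page_info types:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical page list node; Xen uses struct page_info + page_list. */
struct page { struct page *next; };

static unsigned int pages_freed;
static void free_page(struct page *pg) { (void)pg; pages_freed++; }

/*
 * Drain the list, optionally keeping the designated root page on it,
 * mirroring the new remove_root parameter: when the root is kept, it is
 * re-added to the (now empty) list so a later call with
 * remove_root == true still frees it.
 */
static void teardown(struct page **head, struct page *root_pg,
                     bool remove_root)
{
    struct page *pg, *keep = NULL;

    while ( (pg = *head) != NULL )
    {
        *head = pg->next;
        if ( !remove_root && pg == root_pg )
            keep = pg;       /* the guest may still be running on this */
        else
            free_page(pg);
    }

    if ( keep )
    {
        keep->next = NULL;
        *head = keep;
    }
}
```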
---
 xen/arch/x86/mm/hap/hap.c       |  6 +++---
 xen/arch/x86/mm/p2m.c           | 20 ++++++++++++++++----
 xen/arch/x86/mm/shadow/common.c |  4 ++--
 xen/include/asm-x86/p2m.h       |  2 +-
 4 files changed, 22 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 1349de01d4..395fd32559 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -540,18 +540,18 @@ void hap_final_teardown(struct domain *d)
         }
 
         for ( i = 0; i < MAX_ALTP2M; i++ )
-            p2m_teardown(d->arch.altp2m_p2m[i]);
+            p2m_teardown(d->arch.altp2m_p2m[i], true);
     }
 
     /* Destroy nestedp2m's first */
     for (i = 0; i < MAX_NESTEDP2M; i++) {
-        p2m_teardown(d->arch.nested_p2m[i]);
+        p2m_teardown(d->arch.nested_p2m[i], true);
     }
 
     if ( d->arch.paging.hap.total_pages != 0 )
         hap_teardown(d, NULL);
 
-    p2m_teardown(p2m_get_hostp2m(d));
+    p2m_teardown(p2m_get_hostp2m(d), true);
     /* Free any memory that the p2m teardown released */
     paging_lock(d);
     hap_set_allocation(d, 0, NULL);
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index be5e9c031a..7ec6466922 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -737,11 +737,11 @@ int p2m_alloc_table(struct p2m_domain *p2m)
  * hvm fixme: when adding support for pvh non-hardware domains, this path must
  * cleanup any foreign p2m types (release refcnts on them).
  */
-void p2m_teardown(struct p2m_domain *p2m)
+void p2m_teardown(struct p2m_domain *p2m, bool remove_root)
 /* Return all the p2m pages to Xen.
  * We know we don't have any extra mappings to these pages */
 {
-    struct page_info *pg;
+    struct page_info *pg, *root_pg = NULL;
     struct domain *d;
 
     if (p2m == NULL)
@@ -751,10 +751,22 @@ void p2m_teardown(struct p2m_domain *p2m)
 
     p2m_lock(p2m);
     ASSERT(atomic_read(&d->shr_pages) == 0);
-    p2m->phys_table = pagetable_null();
+
+    if ( remove_root )
+        p2m->phys_table = pagetable_null();
+    else if ( !pagetable_is_null(p2m->phys_table) )
+    {
+        root_pg = pagetable_get_page(p2m->phys_table);
+        clear_domain_page(pagetable_get_mfn(p2m->phys_table));
+    }
 
     while ( (pg = page_list_remove_head(&p2m->pages)) )
-        d->arch.paging.free_page(d, pg);
+        if ( pg != root_pg )
+            d->arch.paging.free_page(d, pg);
+
+    if ( root_pg )
+        page_list_add(root_pg, &p2m->pages);
+
     p2m_unlock(p2m);
 }
 
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 773777321f..4436ea2c51 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2686,7 +2686,7 @@ int shadow_enable(struct domain *d, u32 mode)
     paging_unlock(d);
  out_unlocked:
     if ( rv != 0 && !pagetable_is_null(p2m_get_pagetable(p2m)) )
-        p2m_teardown(p2m);
+        p2m_teardown(p2m, true);
     if ( rv != 0 && pg != NULL )
     {
         pg->count_info &= ~PGC_count_mask;
@@ -2839,7 +2839,7 @@ void shadow_final_teardown(struct domain *d)
         shadow_teardown(d, NULL);
 
     /* It is now safe to pull down the p2m map. */
-    p2m_teardown(p2m_get_hostp2m(d));
+    p2m_teardown(p2m_get_hostp2m(d), true);
     /* Free any shadow memory that the p2m teardown released */
     paging_lock(d);
     shadow_set_allocation(d, 0, NULL);
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 9be4a9c58e..cfe2e55fcf 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -595,7 +595,7 @@ int p2m_init(struct domain *d);
 int p2m_alloc_table(struct p2m_domain *p2m);
 
 /* Return all the p2m resources to Xen. */
-void p2m_teardown(struct p2m_domain *p2m);
+void p2m_teardown(struct p2m_domain *p2m, bool remove_root);
 void p2m_final_teardown(struct domain *d);
 
 /* Add a page to a domain's p2m table */
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.14


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:44:35 2022
From: patchbot@xen.org
Subject: [xen staging-4.14] x86/HAP: adjust monitor table related error handling
Message-Id: <E1oiFYg-0003Mc-RQ@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:44:34 +0000

commit 3163e34f6abad70160711ef60c21645355f509fb
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 15:34:59 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:34:59 2022 +0200

    x86/HAP: adjust monitor table related error handling
    
    hap_make_monitor_table() will return INVALID_MFN if it encounters an
    error condition, but hap_update_paging_modes() wasn’t handling this
    value, resulting in an inappropriate value being stored in
    monitor_table. This would subsequently misguide at least
    hap_vcpu_teardown(). Avoid this by bailing early.
    
    Further, when a domain has already crashed or (perhaps less important,
    as there's no such path known to lead here) is already dying, avoid
    calling domain_crash() on it again - that's at best confusing.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    master commit: 5b44a61180f4f2e4f490a28400c884dd357ff45d
    master date: 2022-10-11 14:21:56 +0200
---
 xen/arch/x86/mm/hap/hap.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 395fd32559..3d626fe149 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -39,6 +39,7 @@
 #include <asm/domain.h>
 #include <xen/numa.h>
 #include <asm/hvm/nestedhvm.h>
+#include <public/sched.h>
 
 #include "private.h"
 
@@ -404,8 +405,13 @@ static mfn_t hap_make_monitor_table(struct vcpu *v)
     return m4mfn;
 
  oom:
-    printk(XENLOG_G_ERR "out of memory building monitor pagetable\n");
-    domain_crash(d);
+    if ( !d->is_dying &&
+         (!d->is_shutting_down || d->shutdown_code != SHUTDOWN_crash) )
+    {
+        printk(XENLOG_G_ERR "%pd: out of memory building monitor pagetable\n",
+               d);
+        domain_crash(d);
+    }
     return INVALID_MFN;
 }
 
@@ -758,6 +764,9 @@ static void hap_update_paging_modes(struct vcpu *v)
     if ( pagetable_is_null(v->arch.hvm.monitor_table) )
     {
         mfn_t mmfn = hap_make_monitor_table(v);
+
+        if ( mfn_eq(mmfn, INVALID_MFN) )
+            goto unlock;
         v->arch.hvm.monitor_table = pagetable_from_mfn(mmfn);
         make_cr3(v, mmfn);
         hvm_update_host_cr3(v);
@@ -766,6 +775,7 @@ static void hap_update_paging_modes(struct vcpu *v)
     /* CR3 is effectively updated by a mode change. Flush ASIDs, etc. */
     hap_update_cr3(v, 0, false);
 
+ unlock:
     paging_unlock(d);
     put_gfn(d, cr3_gfn);
 }
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.14


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:44:45 2022
From: patchbot@xen.org
Subject: [xen staging-4.14] x86/shadow: tolerate failure of sh_set_toplevel_shadow()
Message-Id: <E1oiFYq-0003NU-UL@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:44:44 +0000

commit 0bab3abf73783da66af8cf7cf7aabb7d86caa035
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 15:35:43 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:35:43 2022 +0200

    x86/shadow: tolerate failure of sh_set_toplevel_shadow()
    
    Subsequently sh_set_toplevel_shadow() will be adjusted to install a
    blank entry in case prealloc fails. There are, in fact, pre-existing
    error paths which would put in place a blank entry. The 4- and 2-level
    code in sh_update_cr3(), however, assumes the top level entry to be
    valid.
    
    Hence bail from the function in the unlikely event that it's not. Note
    that 3-level logic works differently: In particular a guest is free to
    supply a PDPTR pointing at 4 non-present (or otherwise deemed invalid)
    entries. The guest will crash, but we already cope with that.
    
    Really mfn_valid() is likely wrong to use in sh_set_toplevel_shadow(),
    and it should instead be !mfn_eq(gmfn, INVALID_MFN). Avoid such a change
    in the context of a security fix, but add a respective assertion.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: eac000978c1feb5a9ee3236ab0c0da9a477e5336
    master date: 2022-10-11 14:22:24 +0200
---
 xen/arch/x86/mm/shadow/multi.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 99e410d999..c129b8103e 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -3854,6 +3854,7 @@ sh_set_toplevel_shadow(struct vcpu *v,
     /* Now figure out the new contents: is this a valid guest MFN? */
     if ( !mfn_valid(gmfn) )
     {
+        ASSERT(mfn_eq(gmfn, INVALID_MFN));
         new_entry = pagetable_null();
         goto install_new_entry;
     }
@@ -4007,6 +4008,11 @@ sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
     if ( sh_remove_write_access(d, gmfn, 2, 0) != 0 )
         guest_flush_tlb_mask(d, d->dirty_cpumask);
     sh_set_toplevel_shadow(v, 0, gmfn, SH_type_l2_shadow);
+    if ( unlikely(pagetable_is_null(v->arch.shadow_table[0])) )
+    {
+        ASSERT(d->is_dying || d->is_shutting_down);
+        return;
+    }
 #elif GUEST_PAGING_LEVELS == 3
     /* PAE guests have four shadow_table entries, based on the
      * current values of the guest's four l3es. */
@@ -4052,6 +4058,11 @@ sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
     if ( sh_remove_write_access(d, gmfn, 4, 0) != 0 )
         guest_flush_tlb_mask(d, d->dirty_cpumask);
     sh_set_toplevel_shadow(v, 0, gmfn, SH_type_l4_shadow);
+    if ( unlikely(pagetable_is_null(v->arch.shadow_table[0])) )
+    {
+        ASSERT(d->is_dying || d->is_shutting_down);
+        return;
+    }
     if ( !shadow_mode_external(d) && !is_pv_32bit_domain(d) )
     {
         mfn_t smfn = pagetable_get_mfn(v->arch.shadow_table[0]);
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.14


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:44:55 2022
From: patchbot@xen.org
Subject: [xen staging-4.14] x86/shadow: tolerate failure in shadow_prealloc()
Message-Id: <E1oiFZ1-0003O3-1V@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:44:55 +0000

commit b8f4a5de683efbe402db65483d845573c30dbb3f
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 15:36:21 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:36:21 2022 +0200

    x86/shadow: tolerate failure in shadow_prealloc()
    
    Prevent _shadow_prealloc() from calling BUG() when unable to fulfill
    the pre-allocation and instead return true/false.  Modify
    shadow_prealloc() to crash the domain on allocation failure (if the
    domain is not already dying), as shadow cannot operate normally after
    that.  Modify callers to also gracefully handle {_,}shadow_prealloc()
    failing to fulfill the request.
    
    Note this in turn requires adjusting the callers of
    sh_make_monitor_table() also to handle it returning INVALID_MFN.
    sh_update_paging_modes() is also modified to add additional error
    paths in case of allocation failure, some of those will return with
    null monitor page tables (and the domain likely crashed).  This is no
    different from current error paths, but the newly introduced ones are
    more likely to trigger.
    
    The now added failure points in sh_update_paging_modes() also require
    that on some error return paths the previous structures are cleared,
    and thus the monitor table is null.
    
    While there adjust the 'type' parameter type of shadow_prealloc() to
    unsigned int rather than u32.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: b7f93c6afb12b6061e2d19de2f39ea09b569ac68
    master date: 2022-10-11 14:22:53 +0200
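    The shape of the change is an inner __must_check predicate that reports
    failure instead of BUG()-ing, plus an outer wrapper that crashes the
    domain exactly once on failure so individual callers only need to bail
    out. A toy model of that split (struct pool and the crashed flag are
    invented for illustration; they are not Xen's real shadow-pool state):

```c
#include <assert.h>
#include <stdbool.h>

/* Invented stand-in for the domain's shadow pool state. */
struct pool {
    unsigned int free_pages;
    bool crashed;            /* models the effect of domain_crash() */
};

/* Inner helper: report success/failure instead of BUG()-ing. */
static bool try_prealloc(struct pool *p, unsigned int pages)
{
    return p->free_pages >= pages;
}

/*
 * Outer wrapper: crash the domain here on failure, once, rather than
 * relying on every caller to do it - callers merely check the result.
 */
static bool prealloc(struct pool *p, unsigned int pages)
{
    bool ok = try_prealloc(p, pages);

    if ( !ok && !p->crashed )
        p->crashed = true;   /* stands in for domain_crash(d) */

    return ok;
}
```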
---
 xen/arch/x86/mm/shadow/common.c  | 62 ++++++++++++++++++++++++++++++----------
 xen/arch/x86/mm/shadow/multi.c   | 21 ++++++++++----
 xen/arch/x86/mm/shadow/private.h |  3 +-
 3 files changed, 65 insertions(+), 21 deletions(-)

diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 4436ea2c51..6f71636746 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -36,6 +36,7 @@
 #include <asm/shadow.h>
 #include <asm/hvm/ioreq.h>
 #include <xen/numa.h>
+#include <public/sched.h>
 #include "private.h"
 
 DEFINE_PER_CPU(uint32_t,trace_shadow_path_flags);
@@ -927,14 +928,15 @@ static inline void trace_shadow_prealloc_unpin(struct domain *d, mfn_t smfn)
 
 /* Make sure there are at least count order-sized pages
  * available in the shadow page pool. */
-static void _shadow_prealloc(struct domain *d, unsigned int pages)
+static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
 {
     struct vcpu *v;
     struct page_info *sp, *t;
     mfn_t smfn;
     int i;
 
-    if ( d->arch.paging.shadow.free_pages >= pages ) return;
+    if ( d->arch.paging.shadow.free_pages >= pages )
+        return true;
 
     /* Shouldn't have enabled shadows if we've no vcpus. */
     ASSERT(d->vcpu && d->vcpu[0]);
@@ -950,7 +952,8 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
         sh_unpin(d, smfn);
 
         /* See if that freed up enough space */
-        if ( d->arch.paging.shadow.free_pages >= pages ) return;
+        if ( d->arch.paging.shadow.free_pages >= pages )
+            return true;
     }
 
     /* Stage two: all shadow pages are in use in hierarchies that are
@@ -971,7 +974,7 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
                 if ( d->arch.paging.shadow.free_pages >= pages )
                 {
                     guest_flush_tlb_mask(d, d->dirty_cpumask);
-                    return;
+                    return true;
                 }
             }
         }
@@ -984,7 +987,12 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
            d->arch.paging.shadow.total_pages,
            d->arch.paging.shadow.free_pages,
            d->arch.paging.shadow.p2m_pages);
-    BUG();
+
+    ASSERT(d->is_dying);
+
+    guest_flush_tlb_mask(d, d->dirty_cpumask);
+
+    return false;
 }
 
 /* Make sure there are at least count pages of the order according to
@@ -992,9 +1000,19 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
  * This must be called before any calls to shadow_alloc().  Since this
  * will free existing shadows to make room, it must be called early enough
  * to avoid freeing shadows that the caller is currently working on. */
-void shadow_prealloc(struct domain *d, u32 type, unsigned int count)
+bool shadow_prealloc(struct domain *d, unsigned int type, unsigned int count)
 {
-    return _shadow_prealloc(d, shadow_size(type) * count);
+    bool ret = _shadow_prealloc(d, shadow_size(type) * count);
+
+    if ( !ret && !d->is_dying &&
+         (!d->is_shutting_down || d->shutdown_code != SHUTDOWN_crash) )
+        /*
+         * Failing to allocate memory required for shadow usage can only result in
+         * a domain crash; do it here rather than relying on every caller to do it.
+         */
+        domain_crash(d);
+
+    return ret;
 }
 
 /* Deliberately free all the memory we can: this will tear down all of
@@ -1211,7 +1229,7 @@ void shadow_free(struct domain *d, mfn_t smfn)
 static struct page_info *
 shadow_alloc_p2m_page(struct domain *d)
 {
-    struct page_info *pg;
+    struct page_info *pg = NULL;
 
     /* This is called both from the p2m code (which never holds the
      * paging lock) and the log-dirty code (which always does). */
@@ -1229,16 +1247,18 @@ shadow_alloc_p2m_page(struct domain *d)
                     d->arch.paging.shadow.p2m_pages,
                     shadow_min_acceptable_pages(d));
         }
-        paging_unlock(d);
-        return NULL;
+        goto out;
     }
 
-    shadow_prealloc(d, SH_type_p2m_table, 1);
+    if ( !shadow_prealloc(d, SH_type_p2m_table, 1) )
+        goto out;
+
     pg = mfn_to_page(shadow_alloc(d, SH_type_p2m_table, 0));
     d->arch.paging.shadow.p2m_pages++;
     d->arch.paging.shadow.total_pages--;
     ASSERT(!page_get_owner(pg) && !(pg->count_info & PGC_count_mask));
 
+ out:
     paging_unlock(d);
 
     return pg;
@@ -1329,7 +1349,9 @@ int shadow_set_allocation(struct domain *d, unsigned int pages, bool *preempted)
         else if ( d->arch.paging.shadow.total_pages > pages )
         {
             /* Need to return memory to domheap */
-            _shadow_prealloc(d, 1);
+            if ( !_shadow_prealloc(d, 1) )
+                return -ENOMEM;
+
             sp = page_list_remove_head(&d->arch.paging.shadow.freelist);
             ASSERT(sp);
             /*
@@ -2397,12 +2419,13 @@ static void sh_update_paging_modes(struct vcpu *v)
     if ( mfn_eq(v->arch.paging.shadow.oos_snapshot[0], INVALID_MFN) )
     {
         int i;
+
+        if ( !shadow_prealloc(d, SH_type_oos_snapshot, SHADOW_OOS_PAGES) )
+            return;
+
         for(i = 0; i < SHADOW_OOS_PAGES; i++)
-        {
-            shadow_prealloc(d, SH_type_oos_snapshot, 1);
             v->arch.paging.shadow.oos_snapshot[i] =
                 shadow_alloc(d, SH_type_oos_snapshot, 0);
-        }
     }
 #endif /* OOS */
 
@@ -2464,6 +2487,10 @@ static void sh_update_paging_modes(struct vcpu *v)
         if ( pagetable_is_null(v->arch.hvm.monitor_table) )
         {
             mfn_t mmfn = v->arch.paging.mode->shadow.make_monitor_table(v);
+
+            if ( mfn_eq(mmfn, INVALID_MFN) )
+                return;
+
             v->arch.hvm.monitor_table = pagetable_from_mfn(mmfn);
             make_cr3(v, mmfn);
             hvm_update_host_cr3(v);
@@ -2501,6 +2528,11 @@ static void sh_update_paging_modes(struct vcpu *v)
                 old_mfn = pagetable_get_mfn(v->arch.hvm.monitor_table);
                 v->arch.hvm.monitor_table = pagetable_null();
                 new_mfn = v->arch.paging.mode->shadow.make_monitor_table(v);
+                if ( mfn_eq(new_mfn, INVALID_MFN) )
+                {
+                    old_mode->shadow.destroy_monitor_table(v, old_mfn);
+                    return;
+                }
                 v->arch.hvm.monitor_table = pagetable_from_mfn(new_mfn);
                 SHADOW_PRINTK("new monitor table %"PRI_mfn "\n",
                                mfn_x(new_mfn));
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index c129b8103e..aaf56d295e 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -1535,7 +1535,8 @@ sh_make_monitor_table(struct vcpu *v)
     ASSERT(pagetable_get_pfn(v->arch.hvm.monitor_table) == 0);
 
     /* Guarantee we can get the memory we need */
-    shadow_prealloc(d, SH_type_monitor_table, CONFIG_PAGING_LEVELS);
+    if ( !shadow_prealloc(d, SH_type_monitor_table, CONFIG_PAGING_LEVELS) )
+        return INVALID_MFN;
 
     {
         mfn_t m4mfn;
@@ -3067,9 +3068,14 @@ static int sh_page_fault(struct vcpu *v,
      * Preallocate shadow pages *before* removing writable accesses
      * otherwhise an OOS L1 might be demoted and promoted again with
      * writable mappings. */
-    shadow_prealloc(d,
-                    SH_type_l1_shadow,
-                    GUEST_PAGING_LEVELS < 4 ? 1 : GUEST_PAGING_LEVELS - 1);
+    if ( !shadow_prealloc(d, SH_type_l1_shadow,
+                          GUEST_PAGING_LEVELS < 4
+                          ? 1 : GUEST_PAGING_LEVELS - 1) )
+    {
+        paging_unlock(d);
+        put_gfn(d, gfn_x(gfn));
+        return 0;
+    }
 
     rc = gw_remove_write_accesses(v, va, &gw);
 
@@ -3864,7 +3870,12 @@ sh_set_toplevel_shadow(struct vcpu *v,
     if ( !mfn_valid(smfn) )
     {
         /* Make sure there's enough free shadow memory. */
-        shadow_prealloc(d, root_type, 1);
+        if ( !shadow_prealloc(d, root_type, 1) )
+        {
+            new_entry = pagetable_null();
+            goto install_new_entry;
+        }
+
         /* Shadow the page. */
         smfn = sh_make_shadow(v, gmfn, root_type);
     }
diff --git a/xen/arch/x86/mm/shadow/private.h b/xen/arch/x86/mm/shadow/private.h
index 3fd3f0617a..e2100f0f34 100644
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -351,7 +351,8 @@ void shadow_promote(struct domain *d, mfn_t gmfn, u32 type);
 void shadow_demote(struct domain *d, mfn_t gmfn, u32 type);
 
 /* Shadow page allocation functions */
-void  shadow_prealloc(struct domain *d, u32 shadow_type, unsigned int count);
+bool __must_check shadow_prealloc(struct domain *d, unsigned int shadow_type,
+                                  unsigned int count);
 mfn_t shadow_alloc(struct domain *d,
                     u32 shadow_type,
                     unsigned long backpointer);
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.14


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:45:05 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.14] x86/p2m: refuse new allocations for dying domains
Message-Id: <E1oiFZB-0003Om-4x@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:45:05 +0000

commit 9b5a7fd916a74295886a7d473c311e3c7e254e54
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 15:37:32 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:37:32 2022 +0200

    x86/p2m: refuse new allocations for dying domains
    
    This will in particular prevent any attempts to add entries to the p2m,
    once - in a subsequent change - non-root entries have been removed.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: ff600a8cf8e36f8ecbffecf96a035952e022ab87
    master date: 2022-10-11 14:23:22 +0200
---
 xen/arch/x86/mm/hap/hap.c       |  5 ++++-
 xen/arch/x86/mm/shadow/common.c | 18 ++++++++++++++----
 2 files changed, 18 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 3d626fe149..7eeeb1f472 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -244,6 +244,9 @@ static struct page_info *hap_alloc(struct domain *d)
 
     ASSERT(paging_locked_by_me(d));
 
+    if ( unlikely(d->is_dying) )
+        return NULL;
+
     pg = page_list_remove_head(&d->arch.paging.hap.freelist);
     if ( unlikely(!pg) )
         return NULL;
@@ -280,7 +283,7 @@ static struct page_info *hap_alloc_p2m_page(struct domain *d)
         d->arch.paging.hap.p2m_pages++;
         ASSERT(!page_get_owner(pg) && !(pg->count_info & PGC_count_mask));
     }
-    else if ( !d->arch.paging.p2m_alloc_failed )
+    else if ( !d->arch.paging.p2m_alloc_failed && !d->is_dying )
     {
         d->arch.paging.p2m_alloc_failed = 1;
         dprintk(XENLOG_ERR, "d%i failed to allocate from HAP pool\n",
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 6f71636746..8eed7e72fe 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -938,6 +938,10 @@ static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
     if ( d->arch.paging.shadow.free_pages >= pages )
         return true;
 
+    if ( unlikely(d->is_dying) )
+        /* No reclaim when the domain is dying, teardown will take care of it. */
+        return false;
+
     /* Shouldn't have enabled shadows if we've no vcpus. */
     ASSERT(d->vcpu && d->vcpu[0]);
 
@@ -988,7 +992,7 @@ static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
            d->arch.paging.shadow.free_pages,
            d->arch.paging.shadow.p2m_pages);
 
-    ASSERT(d->is_dying);
+    ASSERT_UNREACHABLE();
 
     guest_flush_tlb_mask(d, d->dirty_cpumask);
 
@@ -1002,10 +1006,13 @@ static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
  * to avoid freeing shadows that the caller is currently working on. */
 bool shadow_prealloc(struct domain *d, unsigned int type, unsigned int count)
 {
-    bool ret = _shadow_prealloc(d, shadow_size(type) * count);
+    bool ret;
 
-    if ( !ret && !d->is_dying &&
-         (!d->is_shutting_down || d->shutdown_code != SHUTDOWN_crash) )
+    if ( unlikely(d->is_dying) )
+       return false;
+
+    ret = _shadow_prealloc(d, shadow_size(type) * count);
+    if ( !ret && (!d->is_shutting_down || d->shutdown_code != SHUTDOWN_crash) )
         /*
          * Failing to allocate memory required for shadow usage can only result in
          * a domain crash, do it here rather that relying on every caller to do it.
@@ -1231,6 +1238,9 @@ shadow_alloc_p2m_page(struct domain *d)
 {
     struct page_info *pg = NULL;
 
+    if ( unlikely(d->is_dying) )
+       return NULL;
+
     /* This is called both from the p2m code (which never holds the
      * paging lock) and the log-dirty code (which always does). */
     paging_lock_recursive(d);
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.14
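
[Editor's note: the dying-domain checks added above to hap_alloc() and
_shadow_prealloc() follow one simple pattern: once d->is_dying is set,
allocation and reclaim requests fail fast, and the (preemptible) teardown
path frees the pool instead. A minimal standalone sketch of that pattern,
using simplified stand-in types rather than Xen's real structures:]

```c
#include <stdbool.h>

/* Simplified stand-in for Xen's struct domain; hypothetical, for
 * illustration only. */
struct domain {
    bool is_dying;
    unsigned int free_pages;    /* pages sitting on the paging pool freelist */
};

/*
 * Mirrors the checks the patch adds: a dying domain never reclaims shadows
 * to refill the pool, so callers see failure and teardown frees everything.
 */
bool shadow_prealloc_sketch(struct domain *d, unsigned int pages)
{
    if (d->free_pages >= pages)
        return true;            /* enough memory already in the pool */

    if (d->is_dying)
        return false;           /* no reclaim once the domain is dying */

    /* The real code would tear down existing shadows here to free pages;
     * this sketch has nothing left to reclaim. */
    return false;
}
```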


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:45:15 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.14] x86/p2m: truly free paging pool memory for dying domains
Message-Id: <E1oiFZL-0003PJ-8N@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:45:15 +0000

commit fc1098471822d80a35c6f1ac1ec8c7b45caf6eab
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 15:38:09 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:38:09 2022 +0200

    x86/p2m: truly free paging pool memory for dying domains
    
    Modify {hap,shadow}_free to free the page immediately if the domain is
    dying, so that pages don't accumulate in the pool when
    {shadow,hap}_final_teardown() get called. This is to limit the amount of
    work which needs to be done there (in a non-preemptable manner).
    
    Note the call to shadow_free() in shadow_free_p2m_page() is moved after
    increasing total_pages, so that the decrease done in shadow_free() in
    case the domain is dying doesn't underflow the counter, even if just for
    a short interval.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: f50a2c0e1d057c00d6061f40ae24d068226052ad
    master date: 2022-10-11 14:23:51 +0200
---
 xen/arch/x86/mm/hap/hap.c       | 12 ++++++++++++
 xen/arch/x86/mm/shadow/common.c | 28 +++++++++++++++++++++++++---
 2 files changed, 37 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 7eeeb1f472..febd47e32d 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -264,6 +264,18 @@ static void hap_free(struct domain *d, mfn_t mfn)
 
     ASSERT(paging_locked_by_me(d));
 
+    /*
+     * For dying domains, actually free the memory here. This way less work is
+     * left to hap_final_teardown(), which cannot easily have preemption checks
+     * added.
+     */
+    if ( unlikely(d->is_dying) )
+    {
+        free_domheap_page(pg);
+        d->arch.paging.hap.total_pages--;
+        return;
+    }
+
     d->arch.paging.hap.free_pages++;
     page_list_add_tail(pg, &d->arch.paging.hap.freelist);
 }
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 8eed7e72fe..730c82dcb1 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -1180,6 +1180,7 @@ mfn_t shadow_alloc(struct domain *d,
 void shadow_free(struct domain *d, mfn_t smfn)
 {
     struct page_info *next = NULL, *sp = mfn_to_page(smfn);
+    bool dying = ACCESS_ONCE(d->is_dying);
     struct page_list_head *pin_list;
     unsigned int pages;
     u32 shadow_type;
@@ -1222,11 +1223,32 @@ void shadow_free(struct domain *d, mfn_t smfn)
          * just before the allocator hands the page out again. */
         page_set_tlbflush_timestamp(sp);
         perfc_decr(shadow_alloc_count);
-        page_list_add_tail(sp, &d->arch.paging.shadow.freelist);
+
+        /*
+         * For dying domains, actually free the memory here. This way less
+         * work is left to shadow_final_teardown(), which cannot easily have
+         * preemption checks added.
+         */
+        if ( unlikely(dying) )
+        {
+            /*
+             * The backpointer field (sh.back) used by shadow code aliases the
+             * domain owner field, unconditionally clear it here to avoid
+             * free_domheap_page() attempting to parse it.
+             */
+            page_set_owner(sp, NULL);
+            free_domheap_page(sp);
+        }
+        else
+            page_list_add_tail(sp, &d->arch.paging.shadow.freelist);
+
         sp = next;
     }
 
-    d->arch.paging.shadow.free_pages += pages;
+    if ( unlikely(dying) )
+        d->arch.paging.shadow.total_pages -= pages;
+    else
+        d->arch.paging.shadow.free_pages += pages;
 }
 
 /* Divert a page from the pool to be used by the p2m mapping.
@@ -1296,9 +1318,9 @@ shadow_free_p2m_page(struct domain *d, struct page_info *pg)
      * paging lock) and the log-dirty code (which always does). */
     paging_lock_recursive(d);
 
-    shadow_free(d, page_to_mfn(pg));
     d->arch.paging.shadow.p2m_pages--;
     d->arch.paging.shadow.total_pages++;
+    shadow_free(d, page_to_mfn(pg));
 
     paging_unlock(d);
 }
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.14
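
[Editor's note: the counter shuffle in shadow_free_p2m_page() above is
subtle: shadow_free() now decrements total_pages itself when the domain is
dying, so the caller must raise total_pages *before* calling it to avoid a
transient underflow. A sketch of that bookkeeping, with hypothetical
simplified counters in place of d->arch.paging.shadow:]

```c
#include <stdbool.h>

/* Hypothetical, simplified view of the shadow pool's counters. */
struct shadow_pool {
    bool is_dying;
    unsigned int total_pages;   /* pages owned by the pool */
    unsigned int free_pages;    /* subset of total_pages on the freelist */
    unsigned int p2m_pages;     /* pages diverted to the p2m */
};

/* shadow_free(): a dying domain's page goes straight back to the heap and
 * leaves the pool entirely; otherwise it is parked on the freelist. */
void shadow_free_sketch(struct shadow_pool *p, unsigned int pages)
{
    if (p->is_dying)
        p->total_pages -= pages;    /* really freed (free_domheap_page()) */
    else
        p->free_pages += pages;     /* back on the freelist */
}

/* shadow_free_p2m_page(): total_pages is incremented before the free, so
 * the dying-domain decrement above cannot underflow the counter. */
void shadow_free_p2m_page_sketch(struct shadow_pool *p)
{
    p->p2m_pages--;
    p->total_pages++;
    shadow_free_sketch(p, 1);
}
```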


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:45:26 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.14] x86/p2m: free the paging memory pool preemptively
Message-Id: <E1oiFZV-0003Pi-Bj@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:45:25 +0000

commit f90615ce03c14b5288bdacd796ada23b4e9d0f7b
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 15:38:30 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:38:30 2022 +0200

    x86/p2m: free the paging memory pool preemptively
    
    The paging memory pool is currently freed in two different places:
    from {shadow,hap}_teardown() via domain_relinquish_resources() and
    from {shadow,hap}_final_teardown() via complete_domain_destroy().
    While the former does handle preemption, the latter doesn't.
    
    Attempt to move as much p2m related freeing as possible to happen
    before the call to {shadow,hap}_teardown(), so that most memory can be
    freed in a preemptive way.  In order to avoid causing issues to
    existing callers, leave the root p2m page tables set and free them in
    {hap,shadow}_final_teardown().  Also modify {hap,shadow}_free to free
    the page immediately if the domain is dying, so that pages don't
    accumulate in the pool when {shadow,hap}_final_teardown() get called.
    
    Move altp2m_vcpu_disable_ve() to be done in hap_teardown(), as that's
    the place where altp2m_active gets disabled now.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Reported-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: e7aa55c0aab36d994bf627c92bd5386ae167e16e
    master date: 2022-10-11 14:24:21 +0200
---
 xen/arch/x86/domain.c           |  7 -------
 xen/arch/x86/mm/hap/hap.c       | 46 +++++++++++++++++++++++++++--------------
 xen/arch/x86/mm/shadow/common.c | 16 ++++++++++++++
 3 files changed, 46 insertions(+), 23 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 3658e50d56..4fb78d38e7 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -38,7 +38,6 @@
 #include <xen/livepatch.h>
 #include <public/sysctl.h>
 #include <public/hvm/hvm_vcpu.h>
-#include <asm/altp2m.h>
 #include <asm/regs.h>
 #include <asm/mc146818rtc.h>
 #include <asm/system.h>
@@ -2120,12 +2119,6 @@ int domain_relinquish_resources(struct domain *d)
             vpmu_destroy(v);
         }
 
-        if ( altp2m_active(d) )
-        {
-            for_each_vcpu ( d, v )
-                altp2m_vcpu_disable_ve(v);
-        }
-
         if ( is_pv_domain(d) )
         {
             for_each_vcpu ( d, v )
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index febd47e32d..be46d6e01f 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -28,6 +28,7 @@
 #include <xen/domain_page.h>
 #include <xen/guest_access.h>
 #include <xen/keyhandler.h>
+#include <asm/altp2m.h>
 #include <asm/event.h>
 #include <asm/page.h>
 #include <asm/current.h>
@@ -545,24 +546,8 @@ void hap_final_teardown(struct domain *d)
     unsigned int i;
 
     if ( hvm_altp2m_supported() )
-    {
-        d->arch.altp2m_active = 0;
-
-        if ( d->arch.altp2m_eptp )
-        {
-            free_xenheap_page(d->arch.altp2m_eptp);
-            d->arch.altp2m_eptp = NULL;
-        }
-
-        if ( d->arch.altp2m_visible_eptp )
-        {
-            free_xenheap_page(d->arch.altp2m_visible_eptp);
-            d->arch.altp2m_visible_eptp = NULL;
-        }
-
         for ( i = 0; i < MAX_ALTP2M; i++ )
             p2m_teardown(d->arch.altp2m_p2m[i], true);
-    }
 
     /* Destroy nestedp2m's first */
     for (i = 0; i < MAX_NESTEDP2M; i++) {
@@ -577,6 +562,8 @@ void hap_final_teardown(struct domain *d)
     paging_lock(d);
     hap_set_allocation(d, 0, NULL);
     ASSERT(d->arch.paging.hap.p2m_pages == 0);
+    ASSERT(d->arch.paging.hap.free_pages == 0);
+    ASSERT(d->arch.paging.hap.total_pages == 0);
     paging_unlock(d);
 }
 
@@ -584,6 +571,7 @@ void hap_teardown(struct domain *d, bool *preempted)
 {
     struct vcpu *v;
     mfn_t mfn;
+    unsigned int i;
 
     ASSERT(d->is_dying);
     ASSERT(d != current->domain);
@@ -605,6 +593,32 @@ void hap_teardown(struct domain *d, bool *preempted)
         }
     }
 
+    paging_unlock(d);
+
+    /* Leave the root pt in case we get further attempts to modify the p2m. */
+    if ( hvm_altp2m_supported() )
+    {
+        if ( altp2m_active(d) )
+            for_each_vcpu ( d, v )
+                altp2m_vcpu_disable_ve(v);
+
+        d->arch.altp2m_active = 0;
+
+        FREE_XENHEAP_PAGE(d->arch.altp2m_eptp);
+        FREE_XENHEAP_PAGE(d->arch.altp2m_visible_eptp);
+
+        for ( i = 0; i < MAX_ALTP2M; i++ )
+            p2m_teardown(d->arch.altp2m_p2m[i], false);
+    }
+
+    /* Destroy nestedp2m's after altp2m. */
+    for ( i = 0; i < MAX_NESTEDP2M; i++ )
+        p2m_teardown(d->arch.nested_p2m[i], false);
+
+    p2m_teardown(p2m_get_hostp2m(d), false);
+
+    paging_lock(d);
+
     if ( d->arch.paging.hap.total_pages != 0 )
     {
         hap_set_allocation(d, 0, preempted);
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 730c82dcb1..bedb779ca4 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2795,6 +2795,19 @@ void shadow_teardown(struct domain *d, bool *preempted)
         }
     }
 
+    paging_unlock(d);
+
+    p2m_teardown(p2m_get_hostp2m(d), false);
+
+    paging_lock(d);
+
+    /*
+     * Reclaim all shadow memory so that shadow_set_allocation() doesn't find
+     * in-use pages, as _shadow_prealloc() will no longer try to reclaim pages
+     * because the domain is dying.
+     */
+    shadow_blow_tables(d);
+
 #if (SHADOW_OPTIMIZATIONS & (SHOPT_VIRTUAL_TLB|SHOPT_OUT_OF_SYNC))
     /* Free the virtual-TLB array attached to each vcpu */
     for_each_vcpu(d, v)
@@ -2913,6 +2926,9 @@ void shadow_final_teardown(struct domain *d)
                    d->arch.paging.shadow.total_pages,
                    d->arch.paging.shadow.free_pages,
                    d->arch.paging.shadow.p2m_pages);
+    ASSERT(!d->arch.paging.shadow.total_pages);
+    ASSERT(!d->arch.paging.shadow.free_pages);
+    ASSERT(!d->arch.paging.shadow.p2m_pages);
     paging_unlock(d);
 }
 
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.14


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:45:37 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.14] xen/x86: p2m: Add preemption in p2m_teardown()
Message-Id: <E1oiFZf-0003QT-FH@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:45:35 +0000

commit 804f83bfba8e73ed99a2f839c6731fa2aa9fb7bb
Author:     Julien Grall <jgrall@amazon.com>
AuthorDate: Tue Oct 11 15:38:43 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:38:43 2022 +0200

    xen/x86: p2m: Add preemption in p2m_teardown()
    
    The list p2m->pages contains all the pages used by the P2M. On large
    instances this can be quite large and the time spent to call
    d->arch.paging.free_page() will take more than 1ms for an 80GB guest
    on a Xen running in a nested environment on a c5.metal.
    
    By extrapolation, it would take > 100ms for an 8TB guest (what we
    currently security support). So add some preemption in p2m_teardown()
    and propagate it to the callers. Note there are 3 places where
    the preemption is not enabled:
        - hap_final_teardown()/shadow_final_teardown(): We are
          preventing updates to the P2M once the domain is dying (so
          no more pages can be allocated) and most of the P2M pages
          will be freed in a preemptive manner when relinquishing the
          resources. So it is fine to disable preemption.
        - shadow_enable(): This is fine because it will undo the allocation
          that may have been made by p2m_alloc_table() (so only the root
          page table).
    
    The preemption is arbitrarily checked every 1024 iterations.
    
    We now need to include <xen/event.h> in p2m-basic in order to
    import the definition for local_events_need_delivery() used by
    general_preempt_check(). Ideally, the inclusion should happen in
    xen/sched.h but it opened a can of worms.
    
    Note that with the current approach, Xen doesn't keep track of
    whether the alt/nested P2Ms have been cleared, so some of the work
    is redundant. However, this is not expected to incur much overhead
    (the P2M lock shouldn't be contended during teardown), so this
    optimization is left outside of the security fix.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    master commit: 8a2111250b424edc49c65c4d41b276766d30635c
    master date: 2022-10-11 14:24:48 +0200
---
 xen/arch/x86/mm/hap/hap.c       | 22 ++++++++++++++++------
 xen/arch/x86/mm/p2m.c           | 18 +++++++++++++++---
 xen/arch/x86/mm/shadow/common.c | 12 +++++++++---
 xen/include/asm-x86/p2m.h       |  2 +-
 4 files changed, 41 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index be46d6e01f..406c237eed 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -547,17 +547,17 @@ void hap_final_teardown(struct domain *d)
 
     if ( hvm_altp2m_supported() )
         for ( i = 0; i < MAX_ALTP2M; i++ )
-            p2m_teardown(d->arch.altp2m_p2m[i], true);
+            p2m_teardown(d->arch.altp2m_p2m[i], true, NULL);
 
     /* Destroy nestedp2m's first */
     for (i = 0; i < MAX_NESTEDP2M; i++) {
-        p2m_teardown(d->arch.nested_p2m[i], true);
+        p2m_teardown(d->arch.nested_p2m[i], true, NULL);
     }
 
     if ( d->arch.paging.hap.total_pages != 0 )
         hap_teardown(d, NULL);
 
-    p2m_teardown(p2m_get_hostp2m(d), true);
+    p2m_teardown(p2m_get_hostp2m(d), true, NULL);
     /* Free any memory that the p2m teardown released */
     paging_lock(d);
     hap_set_allocation(d, 0, NULL);
@@ -608,14 +608,24 @@ void hap_teardown(struct domain *d, bool *preempted)
         FREE_XENHEAP_PAGE(d->arch.altp2m_visible_eptp);
 
         for ( i = 0; i < MAX_ALTP2M; i++ )
-            p2m_teardown(d->arch.altp2m_p2m[i], false);
+        {
+            p2m_teardown(d->arch.altp2m_p2m[i], false, preempted);
+            if ( preempted && *preempted )
+                return;
+        }
     }
 
     /* Destroy nestedp2m's after altp2m. */
     for ( i = 0; i < MAX_NESTEDP2M; i++ )
-        p2m_teardown(d->arch.nested_p2m[i], false);
+    {
+        p2m_teardown(d->arch.nested_p2m[i], false, preempted);
+        if ( preempted && *preempted )
+            return;
+    }
 
-    p2m_teardown(p2m_get_hostp2m(d), false);
+    p2m_teardown(p2m_get_hostp2m(d), false, preempted);
+    if ( preempted && *preempted )
+        return;
 
     paging_lock(d);
 
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 7ec6466922..39cfce47a3 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -737,12 +737,13 @@ int p2m_alloc_table(struct p2m_domain *p2m)
  * hvm fixme: when adding support for pvh non-hardware domains, this path must
  * cleanup any foreign p2m types (release refcnts on them).
  */
-void p2m_teardown(struct p2m_domain *p2m, bool remove_root)
+void p2m_teardown(struct p2m_domain *p2m, bool remove_root, bool *preempted)
 /* Return all the p2m pages to Xen.
  * We know we don't have any extra mappings to these pages */
 {
     struct page_info *pg, *root_pg = NULL;
     struct domain *d;
+    unsigned int i = 0;
 
     if (p2m == NULL)
         return;
@@ -761,8 +762,19 @@ void p2m_teardown(struct p2m_domain *p2m, bool remove_root)
     }
 
     while ( (pg = page_list_remove_head(&p2m->pages)) )
-        if ( pg != root_pg )
-            d->arch.paging.free_page(d, pg);
+    {
+        if ( pg == root_pg )
+            continue;
+
+        d->arch.paging.free_page(d, pg);
+
+        /* Arbitrarily check preemption every 1024 iterations */
+        if ( preempted && !(++i % 1024) && general_preempt_check() )
+        {
+            *preempted = true;
+            break;
+        }
+    }
 
     if ( root_pg )
         page_list_add(root_pg, &p2m->pages);
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index bedb779ca4..ba2ef80778 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2749,8 +2749,12 @@ int shadow_enable(struct domain *d, u32 mode)
  out_locked:
     paging_unlock(d);
  out_unlocked:
+    /*
+     * This is fine to ignore the preemption here because only the root
+     * will be allocated by p2m_alloc_table().
+     */
     if ( rv != 0 && !pagetable_is_null(p2m_get_pagetable(p2m)) )
-        p2m_teardown(p2m, true);
+        p2m_teardown(p2m, true, NULL);
     if ( rv != 0 && pg != NULL )
     {
         pg->count_info &= ~PGC_count_mask;
@@ -2797,7 +2801,9 @@ void shadow_teardown(struct domain *d, bool *preempted)
 
     paging_unlock(d);
 
-    p2m_teardown(p2m_get_hostp2m(d), false);
+    p2m_teardown(p2m_get_hostp2m(d), false, preempted);
+    if ( preempted && *preempted )
+        return;
 
     paging_lock(d);
 
@@ -2916,7 +2922,7 @@ void shadow_final_teardown(struct domain *d)
         shadow_teardown(d, NULL);
 
     /* It is now safe to pull down the p2m map. */
-    p2m_teardown(p2m_get_hostp2m(d), true);
+    p2m_teardown(p2m_get_hostp2m(d), true, NULL);
     /* Free any shadow memory that the p2m teardown released */
     paging_lock(d);
     shadow_set_allocation(d, 0, NULL);
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index cfe2e55fcf..3136fcb040 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -595,7 +595,7 @@ int p2m_init(struct domain *d);
 int p2m_alloc_table(struct p2m_domain *p2m);
 
 /* Return all the p2m resources to Xen. */
-void p2m_teardown(struct p2m_domain *p2m, bool remove_root);
+void p2m_teardown(struct p2m_domain *p2m, bool remove_root, bool *preempted);
 void p2m_final_teardown(struct domain *d);
 
 /* Add a page to a domain's p2m table */
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.14


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:45:47 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:45:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420304.665096 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFZr-0007Kv-3Z; Tue, 11 Oct 2022 13:45:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420304.665096; Tue, 11 Oct 2022 13:45:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFZr-0007Kn-0g; Tue, 11 Oct 2022 13:45:47 +0000
Received: by outflank-mailman (input) for mailman id 420304;
 Tue, 11 Oct 2022 13:45:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFZp-0007K5-K7
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:45:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFZp-0002zM-JN
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:45:45 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFZp-0003Qu-Ib
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:45:45 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=4ol+Ezq8cuoqPwyHVt6Xr5X8L6YPUSLUO/7mWGp8rpc=; b=l/8A6RY8fpmBBg+tLXCm8iIphS
	6B9lsCWetEbvirrxwlhuMe41ehxwK/LNWdTGTEn8Q6srCl38O39CbiGBxLqMkRPQeJbzmOs9etuH6
	k718xCKdzr5iXyGEO2SNLhkJZlZ6RiiOuLP3ov5SRzRuPvDq7Qkihgp4xrvIVYpAHxdY=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.14] libxl, docs: Use arch-specific default paging memory
Message-Id: <E1oiFZp-0003Qu-Ib@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:45:45 +0000

commit e3b66e5cba89fc0b59c9a116e7414388d45e04a0
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Tue Oct 11 15:39:00 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:39:00 2022 +0200

    libxl, docs: Use arch-specific default paging memory
    
    The default paging memory (described in the `shadow_memory` entry
    in the xl config) in libxl is used to determine the memory pool
    size for xl guests. Currently this size is only used for x86, and
    includes a portion of RAM to shadow the resident processes. Since
    there are no shadow-mode guests on Arm, that portion of RAM is not
    necessary. Therefore, this commit splits the function
    `libxl_get_required_shadow_memory()` into arch-specific helpers
    renamed to `libxl__arch_get_required_paging_memory()`.
    
    On x86, this helper keeps the original computation from
    `libxl_get_required_shadow_memory()`, so no functional change is
    intended.
    
    On Arm, this helper returns 1MB per vCPU plus 4KB per MiB of RAM
    for the P2M map, plus an additional 512KB.
    
    Also update the xl.cfg documentation to cover the Arm behaviour
    introduced by these code changes, and correct the comment style to
    follow the Xen coding style.
    
    This is part of CVE-2022-33747 / XSA-409.
    
    Suggested-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    master commit: 156a239ea288972425f967ac807b3cb5b5e14874
    master date: 2022-10-11 14:28:37 +0200
---
 docs/man/xl.cfg.5.pod.in  |  5 +++++
 tools/libxl/libxl_arch.h  |  4 ++++
 tools/libxl/libxl_arm.c   | 12 ++++++++++++
 tools/libxl/libxl_utils.c |  9 ++-------
 tools/libxl/libxl_x86.c   | 12 ++++++++++++
 5 files changed, 35 insertions(+), 7 deletions(-)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 0532739c1f..2224080b30 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -1803,6 +1803,11 @@ are not using hardware assisted paging (i.e. you are using shadow
 mode) and your guest workload consists of a very large number of
 similar processes then increasing this value may improve performance.
 
+On Arm, this field is used to determine the size of the guest P2M pages
+pool, and the default value is 1MB per vCPU plus 4KB per MB of RAM for
+the P2M map. Users should adjust this value if bigger P2M pool size is
+needed.
+
 =back
 
 =head3 Processor and Platform Features
diff --git a/tools/libxl/libxl_arch.h b/tools/libxl/libxl_arch.h
index 6a91775b9e..b09f868490 100644
--- a/tools/libxl/libxl_arch.h
+++ b/tools/libxl/libxl_arch.h
@@ -83,6 +83,10 @@ int libxl__arch_extra_memory(libxl__gc *gc,
                              const libxl_domain_build_info *info,
                              uint64_t *out);
 
+_hidden
+unsigned long libxl__arch_get_required_paging_memory(unsigned long maxmem_kb,
+                                                     unsigned int smp_cpus);
+
 #if defined(__i386__) || defined(__x86_64__)
 
 #define LAPIC_BASE_ADDRESS  0xfee00000
diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
index 34f8a29056..f4b3dc8e71 100644
--- a/tools/libxl/libxl_arm.c
+++ b/tools/libxl/libxl_arm.c
@@ -153,6 +153,18 @@ out:
     return rc;
 }
 
+unsigned long libxl__arch_get_required_paging_memory(unsigned long maxmem_kb,
+                                                     unsigned int smp_cpus)
+{
+    /*
+     * 256 pages (1MB) per vcpu,
+     * plus 1 page per MiB of RAM for the P2M map,
+     * This is higher than the minimum that Xen would allocate if no value
+     * were given (but the Xen minimum is for safety, not performance).
+     */
+    return 4 * (256 * smp_cpus + maxmem_kb / 1024);
+}
+
 static struct arch_info {
     const char *guest_type;
     const char *timer_compat;
diff --git a/tools/libxl/libxl_utils.c b/tools/libxl/libxl_utils.c
index b039143b8a..e18b1524ef 100644
--- a/tools/libxl/libxl_utils.c
+++ b/tools/libxl/libxl_utils.c
@@ -18,6 +18,7 @@
 #include <ctype.h>
 
 #include "libxl_internal.h"
+#include "libxl_arch.h"
 #include "_paths.h"
 
 #ifndef LIBXL_HAVE_NONCONST_LIBXL_BASENAME_RETURN_VALUE
@@ -39,13 +40,7 @@ char *libxl_basename(const char *name)
 
 unsigned long libxl_get_required_shadow_memory(unsigned long maxmem_kb, unsigned int smp_cpus)
 {
-    /* 256 pages (1MB) per vcpu,
-       plus 1 page per MiB of RAM for the P2M map,
-       plus 1 page per MiB of RAM to shadow the resident processes.
-       This is higher than the minimum that Xen would allocate if no value
-       were given (but the Xen minimum is for safety, not performance).
-     */
-    return 4 * (256 * smp_cpus + 2 * (maxmem_kb / 1024));
+    return libxl__arch_get_required_paging_memory(maxmem_kb, smp_cpus);
 }
 
 char *libxl_domid_to_name(libxl_ctx *ctx, uint32_t domid)
diff --git a/tools/libxl/libxl_x86.c b/tools/libxl/libxl_x86.c
index 07c7b05e0d..0ad455301d 100644
--- a/tools/libxl/libxl_x86.c
+++ b/tools/libxl/libxl_x86.c
@@ -852,6 +852,18 @@ int libxl__arch_passthrough_mode_setdefault(libxl__gc *gc,
     return rc;
 }
 
+unsigned long libxl__arch_get_required_paging_memory(unsigned long maxmem_kb,
+                                                     unsigned int smp_cpus)
+{
+    /*
+     * 256 pages (1MB) per vcpu,
+     * plus 1 page per MiB of RAM for the P2M map,
+     * plus 1 page per MiB of RAM to shadow the resident processes.
+     * This is higher than the minimum that Xen would allocate if no value
+     * were given (but the Xen minimum is for safety, not performance).
+     */
+    return 4 * (256 * smp_cpus + 2 * (maxmem_kb / 1024));
+}
 
 /*
  * Local variables:
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.14


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:45:57 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:45:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420305.665100 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFa1-0007OK-6o; Tue, 11 Oct 2022 13:45:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420305.665100; Tue, 11 Oct 2022 13:45:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFa1-0007OC-49; Tue, 11 Oct 2022 13:45:57 +0000
Received: by outflank-mailman (input) for mailman id 420305;
 Tue, 11 Oct 2022 13:45:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFZz-0007O0-N3
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:45:55 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFZz-0002za-MJ
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:45:55 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFZz-0003Ra-LX
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:45:55 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=uJpN6Hj92L9c9q74HyfpLRQhZ/JaCm0+t7Hu9yTcHYE=; b=irrOYPwOgxu/elaYmHc73WEe0l
	CG/GJ7LCu/I6cLXMOADGPLUtPjLVAYfMy3EZaOMkWbsO3ODOc+5oIEiTzaBas9OygVSE0LBydS7qY
	xDfM/Qfy+CcfOLpxQO6Wr6ohQ9GzIvl2Lohf8hRMvj2/q/zmzBGtWJMXYpW0HJ8uEvtY=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.14] xen/arm: Construct the P2M pages pool for guests
Message-Id: <E1oiFZz-0003Ra-LX@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:45:55 +0000

commit fd688b06a57a327dc5dbda106a104a2af5e1aa2b
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Tue Oct 11 15:39:18 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:39:18 2022 +0200

    xen/arm: Construct the P2M pages pool for guests
    
    This commit constructs the p2m pages pool for guests from the
    data structure and helper perspective.
    
    This is implemented by:
    
    - Adding a `struct paging_domain`, which contains a freelist, a
    counter variable and a spinlock, to `struct arch_domain` to track
    the free p2m pages and the total number of p2m pages in the p2m
    pages pool.
    
    - Adding a helper `p2m_get_allocation` to get the p2m pool size.
    
    - Adding a helper `p2m_set_allocation` to set the p2m pages pool
    size. This helper should be called before allocating memory for
    a guest.
    
    - Adding a helper `p2m_teardown_allocation` to free the p2m pages
    pool. This helper should be called during xl domain destruction.
    
    This is part of CVE-2022-33747 / XSA-409.
    
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    master commit: 55914f7fc91a468649b8a3ec3f53ae1c4aca6670
    master date: 2022-10-11 14:28:39 +0200
---
 xen/arch/arm/p2m.c           | 88 ++++++++++++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/domain.h | 10 +++++
 xen/include/asm-arm/p2m.h    |  4 ++
 3 files changed, 102 insertions(+)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 62f4d31dc1..0c331a36a5 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -49,6 +49,92 @@ static uint64_t generate_vttbr(uint16_t vmid, mfn_t root_mfn)
     return (mfn_to_maddr(root_mfn) | ((uint64_t)vmid << 48));
 }
 
+/* Return the size of the pool, rounded up to the nearest MB */
+unsigned int p2m_get_allocation(struct domain *d)
+{
+    unsigned long nr_pages = ACCESS_ONCE(d->arch.paging.p2m_total_pages);
+
+    return ROUNDUP(nr_pages, 1 << (20 - PAGE_SHIFT)) >> (20 - PAGE_SHIFT);
+}
+
+/*
+ * Set the pool of pages to the required number of pages.
+ * Returns 0 for success, non-zero for failure.
+ * Call with d->arch.paging.lock held.
+ */
+int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted)
+{
+    struct page_info *pg;
+
+    ASSERT(spin_is_locked(&d->arch.paging.lock));
+
+    for ( ; ; )
+    {
+        if ( d->arch.paging.p2m_total_pages < pages )
+        {
+            /* Need to allocate more memory from domheap */
+            pg = alloc_domheap_page(NULL, 0);
+            if ( pg == NULL )
+            {
+                printk(XENLOG_ERR "Failed to allocate P2M pages.\n");
+                return -ENOMEM;
+            }
+            ACCESS_ONCE(d->arch.paging.p2m_total_pages) =
+                d->arch.paging.p2m_total_pages + 1;
+            page_list_add_tail(pg, &d->arch.paging.p2m_freelist);
+        }
+        else if ( d->arch.paging.p2m_total_pages > pages )
+        {
+            /* Need to return memory to domheap */
+            pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
+            if( pg )
+            {
+                ACCESS_ONCE(d->arch.paging.p2m_total_pages) =
+                    d->arch.paging.p2m_total_pages - 1;
+                free_domheap_page(pg);
+            }
+            else
+            {
+                printk(XENLOG_ERR
+                       "Failed to free P2M pages, P2M freelist is empty.\n");
+                return -ENOMEM;
+            }
+        }
+        else
+            break;
+
+        /* Check to see if we need to yield and try again */
+        if ( preempted && general_preempt_check() )
+        {
+            *preempted = true;
+            return -ERESTART;
+        }
+    }
+
+    return 0;
+}
+
+int p2m_teardown_allocation(struct domain *d)
+{
+    int ret = 0;
+    bool preempted = false;
+
+    spin_lock(&d->arch.paging.lock);
+    if ( d->arch.paging.p2m_total_pages != 0 )
+    {
+        ret = p2m_set_allocation(d, 0, &preempted);
+        if ( preempted )
+        {
+            spin_unlock(&d->arch.paging.lock);
+            return -ERESTART;
+        }
+        ASSERT(d->arch.paging.p2m_total_pages == 0);
+    }
+    spin_unlock(&d->arch.paging.lock);
+
+    return ret;
+}
+
 /* Unlock the flush and do a P2M TLB flush if necessary */
 void p2m_write_unlock(struct p2m_domain *p2m)
 {
@@ -1568,7 +1654,9 @@ int p2m_init(struct domain *d)
     unsigned int cpu;
 
     rwlock_init(&p2m->lock);
+    spin_lock_init(&d->arch.paging.lock);
     INIT_PAGE_LIST_HEAD(&p2m->pages);
+    INIT_PAGE_LIST_HEAD(&d->arch.paging.p2m_freelist);
 
     p2m->vmid = INVALID_VMID;
 
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 9c4db75f08..96a878d334 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -42,6 +42,14 @@ struct vtimer {
     uint64_t cval;
 };
 
+struct paging_domain {
+    spinlock_t lock;
+    /* Free P2M pages from the pre-allocated P2M pool */
+    struct page_list_head p2m_freelist;
+    /* Number of pages from the pre-allocated P2M pool */
+    unsigned long p2m_total_pages;
+};
+
 struct arch_domain
 {
 #ifdef CONFIG_ARM_64
@@ -53,6 +61,8 @@ struct arch_domain
 
     struct hvm_domain hvm;
 
+    struct paging_domain paging;
+
     struct vmmio vmmio;
 
     /* Continuable domain_relinquish_resources(). */
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index f40f82794d..b733f55d48 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -209,6 +209,10 @@ void p2m_restore_state(struct vcpu *n);
 /* Print debugging/statistial info about a domain's p2m */
 void p2m_dump_info(struct domain *d);
 
+unsigned int p2m_get_allocation(struct domain *d);
+int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted);
+int p2m_teardown_allocation(struct domain *d);
+
 static inline void p2m_write_lock(struct p2m_domain *p2m)
 {
     write_lock(&p2m->lock);
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.14


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:46:07 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:46:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420306.665104 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFaB-0007Qv-8R; Tue, 11 Oct 2022 13:46:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420306.665104; Tue, 11 Oct 2022 13:46:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFaB-0007Qo-5e; Tue, 11 Oct 2022 13:46:07 +0000
Received: by outflank-mailman (input) for mailman id 420306;
 Tue, 11 Oct 2022 13:46:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFa9-0007Qb-Q3
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:46:05 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFa9-00033r-PE
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:46:05 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFa9-0003Sf-OY
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:46:05 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=RfuVlio2+WI1fzWNzwh5RrgocbHVSzoEMWb+K3L0LoU=; b=5WLexJgJdpmEhMq11gXyLhNHvT
	OwHu3fAj5MinV9VVMvUZiM3a4Hp5GVeNrlk2XYusx96Hbga0PVz1/6JwC2BmABrDWznWzYmBVEtTJ
	GJN6bUohFsYCkdhhNCDFjvbn3Ev3/Z2RTI5ODHusboizHBGqGz35GF9K7CeKcDGlTplQ=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.14] xen/arm, libxl: Implement XEN_DOMCTL_shadow_op for Arm
Message-Id: <E1oiFa9-0003Sf-OY@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:46:05 +0000

commit 4220eac3799f46ba84316513606a33e1ea33fb4e
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Tue Oct 11 15:42:00 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:42:00 2022 +0200

    xen/arm, libxl: Implement XEN_DOMCTL_shadow_op for Arm
    
    This commit implements the `XEN_DOMCTL_shadow_op` support in Xen
    for Arm. The p2m pages pool size for xl guests is supposed to be
    determined by `XEN_DOMCTL_shadow_op`. Hence, this commit:
    
    - Introduces a function `p2m_domctl` and implements the subops
    `XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION` and
    `XEN_DOMCTL_SHADOW_OP_GET_ALLOCATION` of `XEN_DOMCTL_shadow_op`.
    
    - Adds the `XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION` support in libxl.
    
    This enables setting the shadow memory pool size when creating a
    guest from xl, and getting the shadow memory pool size from Xen.
    
    Note that the `XEN_DOMCTL_shadow_op` added in this commit is only
    a dummy op; the functionality of setting/getting the p2m memory
    pool size for xl guests will be added in the following commits.
    
    This is part of CVE-2022-33747 / XSA-409.
    
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    master commit: cf2a68d2ffbc3ce95e01449d46180bddb10d24a0
    master date: 2022-10-11 14:28:42 +0200
---
 tools/libxl/libxl_arm.c | 12 ++++++++++++
 xen/arch/arm/domctl.c   | 32 ++++++++++++++++++++++++++++++++
 2 files changed, 44 insertions(+)

diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
index f4b3dc8e71..025df1bfd0 100644
--- a/tools/libxl/libxl_arm.c
+++ b/tools/libxl/libxl_arm.c
@@ -130,6 +130,18 @@ int libxl__arch_domain_save_config(libxl__gc *gc,
 int libxl__arch_domain_create(libxl__gc *gc, libxl_domain_config *d_config,
                               uint32_t domid)
 {
+    libxl_ctx *ctx = libxl__gc_owner(gc);
+    unsigned int shadow_mb = DIV_ROUNDUP(d_config->b_info.shadow_memkb, 1024);
+
+    int r = xc_shadow_control(ctx->xch, domid,
+                              XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION,
+                              &shadow_mb, 0);
+    if (r) {
+        LOGED(ERROR, domid,
+              "Failed to set %u MiB shadow allocation", shadow_mb);
+        return ERROR_FAIL;
+    }
+
     return 0;
 }
 
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 9da88b8c64..ef1299ae1c 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -45,11 +45,43 @@ static int handle_vuart_init(struct domain *d,
     return rc;
 }
 
+static long p2m_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
+                       XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
+{
+    if ( unlikely(d == current->domain) )
+    {
+        printk(XENLOG_ERR "Tried to do a p2m domctl op on itself.\n");
+        return -EINVAL;
+    }
+
+    if ( unlikely(d->is_dying) )
+    {
+        printk(XENLOG_ERR "Tried to do a p2m domctl op on dying domain %u\n",
+               d->domain_id);
+        return -EINVAL;
+    }
+
+    switch ( sc->op )
+    {
+    case XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION:
+        return 0;
+    case XEN_DOMCTL_SHADOW_OP_GET_ALLOCATION:
+        return 0;
+    default:
+    {
+        printk(XENLOG_ERR "Bad p2m domctl op %u\n", sc->op);
+        return -EINVAL;
+    }
+    }
+}
+
 long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
                     XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     switch ( domctl->cmd )
     {
+    case XEN_DOMCTL_shadow_op:
+        return p2m_domctl(d, &domctl->u.shadow_op, u_domctl);
     case XEN_DOMCTL_cacheflush:
     {
         gfn_t s = _gfn(domctl->u.cacheflush.start_pfn);
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.14


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:46:17 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:46:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420307.665108 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFaL-0007Tc-9w; Tue, 11 Oct 2022 13:46:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420307.665108; Tue, 11 Oct 2022 13:46:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFaL-0007TU-7J; Tue, 11 Oct 2022 13:46:17 +0000
Received: by outflank-mailman (input) for mailman id 420307;
 Tue, 11 Oct 2022 13:46:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFaJ-0007TE-TF
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:46:15 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFaJ-00037P-SZ
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:46:15 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFaJ-0003TF-Rn
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:46:15 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=f7cawHHmqnkEsnXFzLxZ7Sam1no5I/+U/Jpk36vSypc=; b=D4bvEJ1juYxx9rRtMXm+B97wGR
	2ZnNWvKlydXgvqgv/6HjmaaJF/XT2hKohfwDLJ7EurJLjl3wznAMGDO7OfPd9bnte1CO6lHHNBjTG
	nNjth5bK+Abdrul9VdmPhfU+Ck8qIzbkgl/hqmsJCA2fqXUJrYqck6o44bnb0/1ve15A=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.14] xen/arm: Allocate and free P2M pages from the P2M pool
Message-Id: <E1oiFaJ-0003TF-Rn@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:46:15 +0000

commit 7d64fb52a57109147dd4180e3a3ba4b5e735a117
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Tue Oct 11 15:42:19 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:42:19 2022 +0200

    xen/arm: Allocate and free P2M pages from the P2M pool
    
    This commit sets up/tears down the p2m page pool for non-privileged
    Arm guests by calling `p2m_set_allocation` and `p2m_teardown_allocation`.
    
    - For dom0, P2M pages should come directly from the heap instead of
    the p2m pool, so that the kernel may take advantage of the extended
    regions.
    
    - For xl guests, the p2m pool is set up in `XEN_DOMCTL_shadow_op`
    and destroyed in `domain_relinquish_resources`. Note that
    domctl->u.shadow_op.mb is updated with the new size when setting
    the p2m pool.
    
    - For dom0less domUs, the p2m pool is set up before allocating
    memory during domain creation. Users can specify the p2m pool size
    via the `xen,domain-p2m-mem-mb` dts property.
    
    To actually allocate/free pages from the p2m pool, this commit adds
    two helper functions, `p2m_alloc_page` and `p2m_free_page`, for
    `struct p2m_domain`. By replacing `alloc_domheap_page` and
    `free_domheap_page` with these two helpers, p2m pages are added
    to/removed from the p2m pool's free list rather than the heap.
    
    Since the page returned by `p2m_alloc_page` is already cleaned, take
    the opportunity to remove the redundant `clean_page` call in
    `p2m_create_table`.
    
    This is part of CVE-2022-33747 / XSA-409.
    
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    master commit: cbea5a1149ca7fd4b7cdbfa3ec2e4f109b601ff7
    master date: 2022-10-11 14:28:44 +0200
---
 docs/misc/arm/device-tree/booting.txt |  8 +++++
 xen/arch/arm/domain.c                 |  6 ++++
 xen/arch/arm/domain_build.c           | 29 ++++++++++++++++++
 xen/arch/arm/domctl.c                 | 23 +++++++++++++-
 xen/arch/arm/p2m.c                    | 57 ++++++++++++++++++++++++++++++++---
 5 files changed, 118 insertions(+), 5 deletions(-)

diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
index 5243bc7fd3..470c9491a7 100644
--- a/docs/misc/arm/device-tree/booting.txt
+++ b/docs/misc/arm/device-tree/booting.txt
@@ -164,6 +164,14 @@ with the following properties:
     Both #address-cells and #size-cells need to be specified because
     both sub-nodes (described shortly) have reg properties.
 
+- xen,domain-p2m-mem-mb
+
+    Optional. A 32-bit integer specifying the number of megabytes of RAM
+    used for the domain P2M pool. This is kept in sync with the shadow_memory
+    option in xl.cfg. Leaving this field empty in the device tree will lead
+    to the default domain P2M pool size, i.e. 1MB per guest vCPU plus 4KB
+    per MB of guest RAM plus 512KB for guest extended regions.
+
 Under the "xen,domain" compatible node, one or more sub-nodes are present
 for the DomU kernel and ramdisk.
 
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index caa625bd16..aae615f7d6 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -980,6 +980,7 @@ enum {
     PROG_page,
     PROG_mapping,
     PROG_p2m,
+    PROG_p2m_pool,
     PROG_done,
 };
 
@@ -1035,6 +1036,11 @@ int domain_relinquish_resources(struct domain *d)
         if ( ret )
             return ret;
 
+    PROGRESS(p2m_pool):
+        ret = p2m_teardown_allocation(d);
+        if( ret )
+            return ret;
+
     PROGRESS(done):
         break;
 
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index f49dbf1ca1..3c05fa5ac7 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2333,6 +2333,21 @@ static void __init find_gnttab_region(struct domain *d,
            kinfo->gnttab_start, kinfo->gnttab_start + kinfo->gnttab_size);
 }
 
+static unsigned long __init domain_p2m_pages(unsigned long maxmem_kb,
+                                             unsigned int smp_cpus)
+{
+    /*
+     * Keep in sync with libxl__get_required_paging_memory().
+     * 256 pages (1MB) per vcpu, plus 1 page per MiB of RAM for the P2M map,
+     * plus 128 pages to cover extended regions.
+     */
+    unsigned long memkb = 4 * (256 * smp_cpus + (maxmem_kb / 1024) + 128);
+
+    BUILD_BUG_ON(PAGE_SIZE != SZ_4K);
+
+    return DIV_ROUND_UP(memkb, 1024) << (20 - PAGE_SHIFT);
+}
+
 static int __init construct_domain(struct domain *d, struct kernel_info *kinfo)
 {
     unsigned int i;
@@ -2424,6 +2439,8 @@ static int __init construct_domU(struct domain *d,
     struct kernel_info kinfo = {};
     int rc;
     u64 mem;
+    u32 p2m_mem_mb;
+    unsigned long p2m_pages;
 
     rc = dt_property_read_u64(node, "memory", &mem);
     if ( !rc )
@@ -2433,6 +2450,18 @@ static int __init construct_domU(struct domain *d,
     }
     kinfo.unassigned_mem = (paddr_t)mem * SZ_1K;
 
+    rc = dt_property_read_u32(node, "xen,domain-p2m-mem-mb", &p2m_mem_mb);
+    /* If xen,domain-p2m-mem-mb is not specified, use the default value. */
+    p2m_pages = rc ?
+                p2m_mem_mb << (20 - PAGE_SHIFT) :
+                domain_p2m_pages(mem, d->max_vcpus);
+
+    spin_lock(&d->arch.paging.lock);
+    rc = p2m_set_allocation(d, p2m_pages, NULL);
+    spin_unlock(&d->arch.paging.lock);
+    if ( rc != 0 )
+        return rc;
+
     printk("*** LOADING DOMU cpus=%u memory=%"PRIx64"KB ***\n", d->max_vcpus, mem);
 
     kinfo.vpl011 = dt_property_read_bool(node, "vpl011");
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index ef1299ae1c..dab3da3a23 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -48,6 +48,9 @@ static int handle_vuart_init(struct domain *d,
 static long p2m_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
                        XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
+    long rc;
+    bool preempted = false;
+
     if ( unlikely(d == current->domain) )
     {
         printk(XENLOG_ERR "Tried to do a p2m domctl op on itself.\n");
@@ -64,9 +67,27 @@ static long p2m_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
     switch ( sc->op )
     {
     case XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION:
-        return 0;
+    {
+        /* Allow and handle preemption */
+        spin_lock(&d->arch.paging.lock);
+        rc = p2m_set_allocation(d, sc->mb << (20 - PAGE_SHIFT), &preempted);
+        spin_unlock(&d->arch.paging.lock);
+
+        if ( preempted )
+            /* Not finished. Set up to re-run the call. */
+            rc = hypercall_create_continuation(__HYPERVISOR_domctl, "h",
+                                               u_domctl);
+        else
+            /* Finished. Return the new allocation. */
+            sc->mb = p2m_get_allocation(d);
+
+        return rc;
+    }
     case XEN_DOMCTL_SHADOW_OP_GET_ALLOCATION:
+    {
+        sc->mb = p2m_get_allocation(d);
         return 0;
+    }
     default:
     {
         printk(XENLOG_ERR "Bad p2m domctl op %u\n", sc->op);
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 0c331a36a5..13b06c0fe4 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -49,6 +49,54 @@ static uint64_t generate_vttbr(uint16_t vmid, mfn_t root_mfn)
     return (mfn_to_maddr(root_mfn) | ((uint64_t)vmid << 48));
 }
 
+static struct page_info *p2m_alloc_page(struct domain *d)
+{
+    struct page_info *pg;
+
+    spin_lock(&d->arch.paging.lock);
+    /*
+     * For the hardware domain there should be no limit on the number of pages
+     * that can be allocated, so the kernel may take advantage of the extended
+     * regions. Hence, allocate p2m pages for hardware domains from the heap.
+     */
+    if ( is_hardware_domain(d) )
+    {
+        pg = alloc_domheap_page(NULL, 0);
+        if ( pg == NULL )
+        {
+            printk(XENLOG_G_ERR "Failed to allocate P2M pages for hwdom.\n");
+            spin_unlock(&d->arch.paging.lock);
+            return NULL;
+        }
+    }
+    else
+    {
+        pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
+        if ( unlikely(!pg) )
+        {
+            spin_unlock(&d->arch.paging.lock);
+            return NULL;
+        }
+        d->arch.paging.p2m_total_pages--;
+    }
+    spin_unlock(&d->arch.paging.lock);
+
+    return pg;
+}
+
+static void p2m_free_page(struct domain *d, struct page_info *pg)
+{
+    spin_lock(&d->arch.paging.lock);
+    if ( is_hardware_domain(d) )
+        free_domheap_page(pg);
+    else
+    {
+        d->arch.paging.p2m_total_pages++;
+        page_list_add_tail(pg, &d->arch.paging.p2m_freelist);
+    }
+    spin_unlock(&d->arch.paging.lock);
+}
+
 /* Return the size of the pool, rounded up to the nearest MB */
 unsigned int p2m_get_allocation(struct domain *d)
 {
@@ -750,7 +798,7 @@ static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry)
 
     ASSERT(!p2m_is_valid(*entry));
 
-    page = alloc_domheap_page(NULL, 0);
+    page = p2m_alloc_page(p2m->domain);
     if ( page == NULL )
         return -ENOMEM;
 
@@ -870,7 +918,7 @@ static void p2m_free_entry(struct p2m_domain *p2m,
     pg = mfn_to_page(mfn);
 
     page_list_del(pg, &p2m->pages);
-    free_domheap_page(pg);
+    p2m_free_page(p2m->domain, pg);
 }
 
 static bool p2m_split_superpage(struct p2m_domain *p2m, lpae_t *entry,
@@ -894,7 +942,7 @@ static bool p2m_split_superpage(struct p2m_domain *p2m, lpae_t *entry,
     ASSERT(level < target);
     ASSERT(p2m_is_superpage(*entry, level));
 
-    page = alloc_domheap_page(NULL, 0);
+    page = p2m_alloc_page(p2m->domain);
     if ( !page )
         return false;
 
@@ -1610,7 +1658,7 @@ int p2m_teardown(struct domain *d)
 
     while ( (pg = page_list_remove_head(&p2m->pages)) )
     {
-        free_domheap_page(pg);
+        p2m_free_page(p2m->domain, pg);
         count++;
         /* Arbitrarily preempt every 512 iterations */
         if ( !(count % 512) && hypercall_preempt_check() )
@@ -1634,6 +1682,7 @@ void p2m_final_teardown(struct domain *d)
         return;
 
     ASSERT(page_list_empty(&p2m->pages));
+    ASSERT(page_list_empty(&d->arch.paging.p2m_freelist));
 
     if ( p2m->root )
         free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.14


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:46:27 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:46:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420308.665112 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFaV-0007Wd-Bn; Tue, 11 Oct 2022 13:46:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420308.665112; Tue, 11 Oct 2022 13:46:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFaV-0007WV-8r; Tue, 11 Oct 2022 13:46:27 +0000
Received: by outflank-mailman (input) for mailman id 420308;
 Tue, 11 Oct 2022 13:46:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFaU-0007WH-04
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:46:26 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFaT-00037Z-Vh
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:46:25 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFaT-0003Th-Uw
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:46:25 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=NJ/8QN/YSMRiIyvTMa3DgsVOwUaG9YWkIYFSx6Ga/vU=; b=oRqqfHReZXKp4AP33pVrFQx5vG
	h9rYYgQahSzOILss7UvvkE4rI832M4oEd1sxurUtzIh4mN5ox8rtlTva9Io+JiqA+ICo5gHJLZy0P
	rDEXxcZazXt9y1dJ2rs2Y1Ef3SLDqiDo0/AonhkPfipKjahgYAH/2vUSDhGattBrP8bQ=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.14] gnttab: correct locking on transitive grant copy error path
Message-Id: <E1oiFaT-0003Th-Uw@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:46:25 +0000

commit 6e5608d1c50e0f91ed3226489d9591c70fa37c30
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 15:42:48 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:42:48 2022 +0200

    gnttab: correct locking on transitive grant copy error path
    
    While the comment next to the lock dropping in preparation of
    recursively calling acquire_grant_for_copy() mistakenly talks about the
    rd == td case (excluded a few lines further up), the same concerns apply
    to calling release_grant_for_copy() on a subsequent error path.
    
    This is CVE-2022-33748 / XSA-411.
    
    Fixes: ad48fb963dbf ("gnttab: fix transitive grant handling")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    master commit: 6e3aab858eef614a21a782a3b73acc88e74690ea
    master date: 2022-10-11 14:29:30 +0200
---
 xen/common/grant_table.c | 21 +++++++++++++++++----
 1 file changed, 17 insertions(+), 4 deletions(-)

diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 34498d4652..576b1d34dc 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -2617,9 +2617,8 @@ acquire_grant_for_copy(
                      trans_domid);
 
         /*
-         * acquire_grant_for_copy() could take the lock on the
-         * remote table (if rd == td), so we have to drop the lock
-         * here and reacquire.
+         * acquire_grant_for_copy() will take the lock on the remote table,
+         * so we have to drop the lock here and reacquire.
          */
         active_entry_release(act);
         grant_read_unlock(rgt);
@@ -2656,11 +2655,25 @@ acquire_grant_for_copy(
                           act->trans_gref != trans_gref ||
                           !act->is_sub_page)) )
         {
+            /*
+             * Like above for acquire_grant_for_copy() we need to drop and then
+             * re-acquire the locks here to prevent lock order inversion issues.
+             * Unlike for acquire_grant_for_copy() we don't need to re-check
+             * anything, as release_grant_for_copy() doesn't depend on the grant
+             * table entry: It only updates internal state and the status flags.
+             */
+            active_entry_release(act);
+            grant_read_unlock(rgt);
+
             release_grant_for_copy(td, trans_gref, readonly);
-            fixup_status_for_copy_pin(rd, act, status);
             rcu_unlock_domain(td);
+
+            grant_read_lock(rgt);
+            act = active_entry_acquire(rgt, gref);
+            fixup_status_for_copy_pin(rd, act, status);
             active_entry_release(act);
             grant_read_unlock(rgt);
+
             put_page(*page);
             *page = NULL;
             return ERESTART;
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.14


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:55:08 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:55:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420309.665116 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFis-0000Pt-23; Tue, 11 Oct 2022 13:55:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420309.665116; Tue, 11 Oct 2022 13:55:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFir-0000Pl-VH; Tue, 11 Oct 2022 13:55:05 +0000
Received: by outflank-mailman (input) for mailman id 420309;
 Tue, 11 Oct 2022 13:55:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFiq-0000HG-Ti
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:55:05 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFiq-0003G3-QT
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:55:04 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFiq-0004Al-PC
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:55:04 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=khANW63Y4JqdeSa61Tc3tHwTMNWKf7GYxTmLHvY3l+8=; b=2RJGR0SGcrFwlWnsuhJoffs6hX
	DXzvJGWywdFuBFAF4YSmCDC0rMLZI7cxZI0utR10oBzciRx4tacUdfXNcWU2D1mPiRX7RDFLA6Gfh
	rEMWi95IHjyh2HkN1S5MAwwnhVNwJOQgUxdhtX0Sdkkzgg9U1Gu8tsUILxUNwLq8qLUk=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.13] xen/arm: p2m: Prevent adding mapping when domain is dying
Message-Id: <E1oiFiq-0004Al-PC@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:55:04 +0000

commit 5475195ec490a1cbe226ebe7b709119928673cc8
Author:     Julien Grall <jgrall@amazon.com>
AuthorDate: Tue Oct 11 15:47:15 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:47:15 2022 +0200

    xen/arm: p2m: Prevent adding mapping when domain is dying
    
    During the domain destroy process, the domain will still be accessible
    until it is fully destroyed. The same is true of the P2M, because we
    don't bail out early if is_dying is non-zero. If a domain has
    permission to modify another domain's P2M (i.e. dom0, or a
    stubdomain), then foreign mappings can be added past
    relinquish_p2m_mapping().
    
    Therefore, we need to prevent mappings from being added when the
    domain is dying. This commit does so by adding a d->is_dying check to
    p2m_set_entry(). It also enhances the check in
    relinquish_p2m_mapping() to make sure that no mappings can be added
    to the P2M after the P2M lock is released.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Tested-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    master commit: 3ebe773293e3b945460a3d6f54f3b91915397bab
    master date: 2022-10-11 14:20:18 +0200
---
 xen/arch/arm/p2m.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 993fe4ded2..ff74577638 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1089,6 +1089,15 @@ int p2m_set_entry(struct p2m_domain *p2m,
 {
     int rc = 0;
 
+    /*
+     * Any reference taken by the P2M mappings (e.g. foreign mappings) will
+     * be dropped in relinquish_p2m_mapping(). As the P2M will still be
+     * accessible afterwards, we need to prevent mappings from being added
+     * while the domain is dying.
+     */
+    if ( unlikely(p2m->domain->is_dying) )
+        return -ENOMEM;
+
     while ( nr )
     {
         unsigned long mask;
@@ -1578,6 +1587,8 @@ int relinquish_p2m_mapping(struct domain *d)
     unsigned int order;
     gfn_t start, end;
 
+    BUG_ON(!d->is_dying);
+    /* No mappings can be added in the P2M after the P2M lock is released. */
     p2m_write_lock(p2m);
 
     start = p2m->lowest_mapped_gfn;
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.13


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:55:16 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:55:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420310.665120 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFj2-0000Ry-3L; Tue, 11 Oct 2022 13:55:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420310.665120; Tue, 11 Oct 2022 13:55:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFj2-0000Rm-0a; Tue, 11 Oct 2022 13:55:16 +0000
Received: by outflank-mailman (input) for mailman id 420310;
 Tue, 11 Oct 2022 13:55:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFj0-0000Rc-Uc
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:55:14 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFj0-0003GV-Tl
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:55:14 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFj0-0004BR-Ss
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:55:14 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=tNKODy+KLBLKOOqj6gLJp9J7UyBeZ8vVG9YdulMOcgk=; b=imun4GDbPo5T0YQBeyD0zD191Q
	xvNu7a5Qm1GkFv2KnF6XMg3pQXeqsutMceZij+zErprk4u42Dct+1svXodYW6Tp2VusVVPZEUoOHW
	hic1qNkfd9KE1sJOxGaCFgNYhCw6HNtepvPjBTUc9sct6xg7MkwF1UUk0Az8Kh7zL3yM=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.13] xen/arm: p2m: Handle preemption when freeing intermediate page tables
Message-Id: <E1oiFj0-0004BR-Ss@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:55:14 +0000

commit 4e38cc1baea00384b208b762bccc624b0e070fed
Author:     Julien Grall <jgrall@amazon.com>
AuthorDate: Tue Oct 11 15:47:41 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:47:41 2022 +0200

    xen/arm: p2m: Handle preemption when freeing intermediate page tables
    
    At the moment the P2M page tables are freed, without any preemption,
    when the domain structure is freed. As the P2M is quite large,
    iterating through it may take more time than is reasonable without
    intermediate preemption (to run softirqs and perhaps the scheduler).
    
    Split p2m_teardown() into two parts: one preemptible, called when
    relinquishing the resources, and one non-preemptible, called when
    freeing the domain structure.
    
    As we are now freeing the P2M pages early, we also need to prevent
    further allocation if someone calls p2m_set_entry() past
    p2m_teardown() (I wasn't able to prove this will never happen). This
    is done by the domain->is_dying check, added to p2m_set_entry() by
    the previous patch.
    
    Similarly, we want to make sure that no one can access the freed
    pages. Therefore the root is cleared before the pages are freed.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Tested-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    master commit: 3202084566bba0ef0c45caf8c24302f83d92f9c8
    master date: 2022-10-11 14:20:56 +0200
---
 xen/arch/arm/domain.c        | 12 +++++++++--
 xen/arch/arm/p2m.c           | 47 +++++++++++++++++++++++++++++++++++++++++---
 xen/include/asm-arm/domain.h |  1 +
 xen/include/asm-arm/p2m.h    | 13 ++++++++++--
 4 files changed, 66 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index ddeccb992c..1e24a7dbb4 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -775,10 +775,10 @@ fail:
 void arch_domain_destroy(struct domain *d)
 {
     /* IOMMU page table is shared with P2M, always call
-     * iommu_domain_destroy() before p2m_teardown().
+     * iommu_domain_destroy() before p2m_final_teardown().
      */
     iommu_domain_destroy(d);
-    p2m_teardown(d);
+    p2m_final_teardown(d);
     domain_vgic_free(d);
     domain_vuart_free(d);
     free_xenheap_page(d->shared_info);
@@ -1014,6 +1014,14 @@ int domain_relinquish_resources(struct domain *d)
         if ( ret )
             return ret;
 
+        d->arch.relmem = RELMEM_p2m;
+        /* Fallthrough */
+
+    case RELMEM_p2m:
+        ret = p2m_teardown(d);
+        if ( ret )
+            return ret;
+
         d->arch.relmem = RELMEM_done;
         /* Fallthrough */
 
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index ff74577638..42638787a2 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1495,17 +1495,58 @@ static void p2m_free_vmid(struct domain *d)
     spin_unlock(&vmid_alloc_lock);
 }
 
-void p2m_teardown(struct domain *d)
+int p2m_teardown(struct domain *d)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    unsigned long count = 0;
     struct page_info *pg;
+    unsigned int i;
+    int rc = 0;
+
+    p2m_write_lock(p2m);
+
+    /*
+     * We are about to free the intermediate page-tables, so clear the
+     * root to prevent any walk from using them.
+     */
+    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
+        clear_and_clean_page(p2m->root + i);
+
+    /*
+     * The domain will not be scheduled anymore, so in theory we should
+     * not need to flush the TLBs. Do it as a safety measure.
+     *
+     * Note that all the devices have already been de-assigned, so we
+     * don't need to flush the IOMMU TLB here.
+     */
+    p2m_force_tlb_flush_sync(p2m);
+
+    while ( (pg = page_list_remove_head(&p2m->pages)) )
+    {
+        free_domheap_page(pg);
+        count++;
+        /* Arbitrarily preempt every 512 iterations */
+        if ( !(count % 512) && hypercall_preempt_check() )
+        {
+            rc = -ERESTART;
+            break;
+        }
+    }
+
+    p2m_write_unlock(p2m);
+
+    return rc;
+}
+
+void p2m_final_teardown(struct domain *d)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
     /* p2m not actually initialized */
     if ( !p2m->domain )
         return;
 
-    while ( (pg = page_list_remove_head(&p2m->pages)) )
-        free_domheap_page(pg);
+    ASSERT(page_list_empty(&p2m->pages));
 
     if ( p2m->root )
         free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index f1776c6c08..9b44a9648c 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -62,6 +62,7 @@ struct arch_domain
         RELMEM_xen,
         RELMEM_page,
         RELMEM_mapping,
+        RELMEM_p2m,
         RELMEM_done,
     } relmem;
 
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 5fdb6e8183..20df621271 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -171,8 +171,17 @@ void setup_virt_paging(void);
 /* Init the datastructures for later use by the p2m code */
 int p2m_init(struct domain *d);
 
-/* Return all the p2m resources to Xen. */
-void p2m_teardown(struct domain *d);
+/*
+ * The P2M resources are freed in two parts:
+ *  - p2m_teardown() will be called when relinquishing the resources. It
+ *    will free large resources (e.g. intermediate page-tables) whose
+ *    freeing requires preemption.
+ *  - p2m_final_teardown() will be called when the domain struct is
+ *    being freed. This *cannot* be preempted and therefore only small
+ *    resources should be freed here.
+ */
+int p2m_teardown(struct domain *d);
+void p2m_final_teardown(struct domain *d);
 
 /*
  * Remove mapping refcount on each mapping page in the p2m
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.13
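[Editorial note: the relinquish-resources hunk above frees p2m pages in a loop and voluntarily preempts every 512 iterations, returning -ERESTART so the operation can be continued later. A minimal, self-contained sketch of that pattern follows; `struct page`, `preempt_check()` and the error value are simplified stand-ins, not the real Xen definitions.]

```c
#include <assert.h>
#include <stdlib.h>

#define ERESTART 85 /* stand-in for Xen's -ERESTART */

struct page {
    struct page *next;
};

/* Stand-in for hypercall_preempt_check(): pretend other work is
 * pending once at least `preempt_after` pages have been freed. */
static unsigned int preempt_after;

static int preempt_check(unsigned int total_freed)
{
    return preempt_after && total_freed >= preempt_after;
}

/*
 * Drain the page list, preempting every 512 iterations as the patch
 * does.  Returns 0 when the list is empty, -ERESTART when the caller
 * should arrange for the operation to be restarted later.
 */
static int teardown_pages(struct page **head, unsigned int *total_freed)
{
    unsigned int count = 0;

    while ( *head != NULL )
    {
        struct page *pg = *head;

        *head = pg->next;
        free(pg);                 /* real code: free_domheap_page(pg) */
        count++;
        (*total_freed)++;
        /* Arbitrarily preempt every 512 iterations. */
        if ( !(count % 512) && preempt_check(*total_freed) )
            return -ERESTART;
    }

    return 0;
}
```

Note that no explicit progress cursor is needed: the page list itself records how far teardown got, so a restarted call simply resumes popping from the head.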


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:55:26 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:55:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420311.665124 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFjC-0000Ul-56; Tue, 11 Oct 2022 13:55:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420311.665124; Tue, 11 Oct 2022 13:55:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFjC-0000Ud-2A; Tue, 11 Oct 2022 13:55:26 +0000
Received: by outflank-mailman (input) for mailman id 420311;
 Tue, 11 Oct 2022 13:55:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFjB-0000UV-2z
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:55:25 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFjB-0003Gg-1p
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:55:25 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFjA-0004Bs-WD
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:55:25 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=O1Sf+ap2SnHFSctEImLyFdZv5vzzt6JjwZNqdnVjerI=; b=iSIQV4bkex4k3s2WCK1T1+puR7
	my8wni3Q3xOAMZkD08AmvVA2olwEmBCq0DgGkgugDDE18N2kQvVFzrRJFVmk++d5RiSyEyjbL2lny
	aDfp9GyAll10SKqHFoMs8ZHXCNySTVg4OXq3LcBehUD3j9/oXKt+qp+t/XKjxmJTMgHo=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.13] x86/p2m: add option to skip root pagetable removal in p2m_teardown()
Message-Id: <E1oiFjA-0004Bs-WD@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:55:24 +0000

commit 763f965d04c5eb01890f697aaaaa9120d552672a
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 15:48:01 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:48:01 2022 +0200

    x86/p2m: add option to skip root pagetable removal in p2m_teardown()
    
    Add a new parameter to p2m_teardown() in order to select whether the
    root page table should also be freed.  Note that all users are
    adjusted to pass the parameter to remove the root page tables, so
    behavior is not modified.
    
    No functional change intended.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Suggested-by: Julien Grall <julien@xen.org>
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: 1df52a270225527ae27bfa2fc40347bf93b78357
    master date: 2022-10-11 14:21:23 +0200
---
 xen/arch/x86/mm/hap/hap.c       |  6 +++---
 xen/arch/x86/mm/p2m.c           | 20 ++++++++++++++++----
 xen/arch/x86/mm/shadow/common.c |  4 ++--
 xen/include/asm-x86/p2m.h       |  2 +-
 4 files changed, 22 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 9aac006d65..c2d425a4b1 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -521,18 +521,18 @@ void hap_final_teardown(struct domain *d)
         }
 
         for ( i = 0; i < MAX_ALTP2M; i++ )
-            p2m_teardown(d->arch.altp2m_p2m[i]);
+            p2m_teardown(d->arch.altp2m_p2m[i], true);
     }
 
     /* Destroy nestedp2m's first */
     for (i = 0; i < MAX_NESTEDP2M; i++) {
-        p2m_teardown(d->arch.nested_p2m[i]);
+        p2m_teardown(d->arch.nested_p2m[i], true);
     }
 
     if ( d->arch.paging.hap.total_pages != 0 )
         hap_teardown(d, NULL);
 
-    p2m_teardown(p2m_get_hostp2m(d));
+    p2m_teardown(p2m_get_hostp2m(d), true);
     /* Free any memory that the p2m teardown released */
     paging_lock(d);
     hap_set_allocation(d, 0, NULL);
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 91f7b7760c..859edfc95b 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -737,11 +737,11 @@ int p2m_alloc_table(struct p2m_domain *p2m)
  * hvm fixme: when adding support for pvh non-hardware domains, this path must
  * cleanup any foreign p2m types (release refcnts on them).
  */
-void p2m_teardown(struct p2m_domain *p2m)
+void p2m_teardown(struct p2m_domain *p2m, bool remove_root)
 /* Return all the p2m pages to Xen.
  * We know we don't have any extra mappings to these pages */
 {
-    struct page_info *pg;
+    struct page_info *pg, *root_pg = NULL;
     struct domain *d;
 
     if (p2m == NULL)
@@ -751,10 +751,22 @@ void p2m_teardown(struct p2m_domain *p2m)
 
     p2m_lock(p2m);
     ASSERT(atomic_read(&d->shr_pages) == 0);
-    p2m->phys_table = pagetable_null();
+
+    if ( remove_root )
+        p2m->phys_table = pagetable_null();
+    else if ( !pagetable_is_null(p2m->phys_table) )
+    {
+        root_pg = pagetable_get_page(p2m->phys_table);
+        clear_domain_page(pagetable_get_mfn(p2m->phys_table));
+    }
 
     while ( (pg = page_list_remove_head(&p2m->pages)) )
-        d->arch.paging.free_page(d, pg);
+        if ( pg != root_pg )
+            d->arch.paging.free_page(d, pg);
+
+    if ( root_pg )
+        page_list_add(root_pg, &p2m->pages);
+
     p2m_unlock(p2m);
 }
 
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index dd8d9240ea..68d2679c7a 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2684,7 +2684,7 @@ int shadow_enable(struct domain *d, u32 mode)
     paging_unlock(d);
  out_unlocked:
     if ( rv != 0 && !pagetable_is_null(p2m_get_pagetable(p2m)) )
-        p2m_teardown(p2m);
+        p2m_teardown(p2m, true);
     if ( rv != 0 && pg != NULL )
     {
         pg->count_info &= ~PGC_count_mask;
@@ -2835,7 +2835,7 @@ void shadow_final_teardown(struct domain *d)
         shadow_teardown(d, NULL);
 
     /* It is now safe to pull down the p2m map. */
-    p2m_teardown(p2m_get_hostp2m(d));
+    p2m_teardown(p2m_get_hostp2m(d), true);
     /* Free any shadow memory that the p2m teardown released */
     paging_lock(d);
     shadow_set_allocation(d, 0, NULL);
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 807dc4b1a9..cab4ca60fa 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -599,7 +599,7 @@ int p2m_init(struct domain *d);
 int p2m_alloc_table(struct p2m_domain *p2m);
 
 /* Return all the p2m resources to Xen. */
-void p2m_teardown(struct p2m_domain *p2m);
+void p2m_teardown(struct p2m_domain *p2m, bool remove_root);
 void p2m_final_teardown(struct domain *d);
 
 /* Add a page to a domain's p2m table */
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.13
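[Editorial note: the p2m_teardown() change above frees every page on the p2m list except the (cleared) root, which is put back on the list so a later pass can release it. The "free all but one" pass can be modelled in isolation with a plain singly linked list; the list helpers below imitate, but are not, Xen's page_list API.]

```c
#include <assert.h>
#include <stddef.h>

struct page {
    struct page *next;
};

static struct page *page_list_remove_head(struct page **head)
{
    struct page *pg = *head;

    if ( pg )
        *head = pg->next;
    return pg;
}

static void page_list_add(struct page *pg, struct page **head)
{
    pg->next = *head;
    *head = pg;
}

/*
 * Mirror of the new p2m_teardown() loop: pop every page, "free" all of
 * them except root_pg, then put root_pg back so it ends up as the only
 * entry left on the list.  Returns the number of pages freed.
 */
static unsigned int teardown_keep_root(struct page **head,
                                       struct page *root_pg)
{
    struct page *pg;
    unsigned int freed = 0;

    while ( (pg = page_list_remove_head(head)) != NULL )
        if ( pg != root_pg )
            freed++;           /* real code: d->arch.paging.free_page() */

    if ( root_pg )
        page_list_add(root_pg, head);

    return freed;
}
```

When `remove_root` is true the real code instead nulls `phys_table` first, so the root page is freed along with the rest.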


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:55:36 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:55:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420312.665128 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFjM-0000XV-6K; Tue, 11 Oct 2022 13:55:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420312.665128; Tue, 11 Oct 2022 13:55:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFjM-0000XN-3i; Tue, 11 Oct 2022 13:55:36 +0000
Received: by outflank-mailman (input) for mailman id 420312;
 Tue, 11 Oct 2022 13:55:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFjL-0000XE-5l
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:55:35 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFjL-0003Gq-4z
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:55:35 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFjL-0004CU-42
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:55:35 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=V8zbjABCQljucXtCNBkax3ZZRXp52RkTl9qP5nnz5jc=; b=tMmicsRhIBQ8e1v/eYUZRgBAEk
	t+9ct3wOQehN24BE6IFOvk76dyyEV/GqsaoAihbdAvAvsXC6V8LLlAhnNbLiAKLIYpeanvQg0h39c
	JiKiRzDhUA+KuGFnRmXbaj5KOq0SsimvLsxB7eR+D1bVnaf/TkbS7VLOQFH/bUYWaVYs=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.13] x86/HAP: adjust monitor table related error handling
Message-Id: <E1oiFjL-0004CU-42@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:55:35 +0000

commit 0021c269786e0442d6f922d110d957867fff421d
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 15:48:23 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:48:23 2022 +0200

    x86/HAP: adjust monitor table related error handling
    
    hap_make_monitor_table() will return INVALID_MFN if it encounters an
    error condition, but hap_update_paging_modes() wasn’t handling this
    value, resulting in an inappropriate value being stored in
    monitor_table. This would subsequently mislead at least
    hap_vcpu_teardown(). Avoid this by bailing early.
    
    Further, when a domain has already crashed or (perhaps less
    important as there's no such path known to lead here) is already dying,
    avoid calling domain_crash() on it again - that's at best confusing.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    master commit: 5b44a61180f4f2e4f490a28400c884dd357ff45d
    master date: 2022-10-11 14:21:56 +0200
---
 xen/arch/x86/mm/hap/hap.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index c2d425a4b1..d3931b4e49 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -39,6 +39,7 @@
 #include <asm/domain.h>
 #include <xen/numa.h>
 #include <asm/hvm/nestedhvm.h>
+#include <public/sched.h>
 
 #include "private.h"
 
@@ -405,8 +406,13 @@ static mfn_t hap_make_monitor_table(struct vcpu *v)
     return m4mfn;
 
  oom:
-    printk(XENLOG_G_ERR "out of memory building monitor pagetable\n");
-    domain_crash(d);
+    if ( !d->is_dying &&
+         (!d->is_shutting_down || d->shutdown_code != SHUTDOWN_crash) )
+    {
+        printk(XENLOG_G_ERR "%pd: out of memory building monitor pagetable\n",
+               d);
+        domain_crash(d);
+    }
     return INVALID_MFN;
 }
 
@@ -693,6 +699,9 @@ static void hap_update_paging_modes(struct vcpu *v)
     if ( pagetable_is_null(v->arch.monitor_table) )
     {
         mfn_t mmfn = hap_make_monitor_table(v);
+
+        if ( mfn_eq(mmfn, INVALID_MFN) )
+            goto unlock;
         v->arch.monitor_table = pagetable_from_mfn(mmfn);
         make_cr3(v, mmfn);
         hvm_update_host_cr3(v);
@@ -701,6 +710,7 @@ static void hap_update_paging_modes(struct vcpu *v)
     /* CR3 is effectively updated by a mode change. Flush ASIDs, etc. */
     hap_update_cr3(v, 0, false);
 
+ unlock:
     paging_unlock(d);
     put_gfn(d, cr3_gfn);
 }
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.13


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:55:46 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:55:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420313.665131 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFjW-0000aB-7j; Tue, 11 Oct 2022 13:55:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420313.665131; Tue, 11 Oct 2022 13:55:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFjW-0000a3-5B; Tue, 11 Oct 2022 13:55:46 +0000
Received: by outflank-mailman (input) for mailman id 420313;
 Tue, 11 Oct 2022 13:55:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFjV-0000Zl-8z
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:55:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFjV-0003H2-8D
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:55:45 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFjV-0004D0-7I
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:55:45 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=9ASBpzCNLp2eFJYn2qFVMjmxivLElFabTL1rWqpLtGo=; b=wu9YvJ1uTuEXS39fTGqIGAiamu
	4lB9RKy5nYOK/+ick3Ir3unzC7u4WBV4CYaJJ3LOMbrIudS/X7LZcVPoKr1JnEeF2k+q7vg2ISpXp
	K5cKAsgaczvw4IdZ2AVMKnE9cJ3J7BObY4CDW65cQyIZskwmmL+dFlSLoRlp5RWUhsgQ=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.13] x86/shadow: tolerate failure of sh_set_toplevel_shadow()
Message-Id: <E1oiFjV-0004D0-7I@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:55:45 +0000

commit aa7891098cc46a7a11b2d823cd8386be8b04c453
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 15:48:59 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:48:59 2022 +0200

    x86/shadow: tolerate failure of sh_set_toplevel_shadow()
    
    Subsequently sh_set_toplevel_shadow() will be adjusted to install a
    blank entry in case prealloc fails. There are, in fact, pre-existing
    error paths which would put in place a blank entry. The 4- and 2-level
    code in sh_update_cr3(), however, assume the top level entry to be
    valid.
    
    Hence bail from the function in the unlikely event that it's not. Note
    that 3-level logic works differently: In particular a guest is free to
    supply a PDPTR pointing at 4 non-present (or otherwise deemed invalid)
    entries. The guest will crash, but we already cope with that.
    
    Really mfn_valid() is likely wrong to use in sh_set_toplevel_shadow(),
    and it should instead be !mfn_eq(gmfn, INVALID_MFN). Avoid such a change
    in security context, but add a respective assertion.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: eac000978c1feb5a9ee3236ab0c0da9a477e5336
    master date: 2022-10-11 14:22:24 +0200
---
 xen/arch/x86/mm/shadow/multi.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 61e9cc951e..bb78b387eb 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -3861,6 +3861,7 @@ sh_set_toplevel_shadow(struct vcpu *v,
     /* Now figure out the new contents: is this a valid guest MFN? */
     if ( !mfn_valid(gmfn) )
     {
+        ASSERT(mfn_eq(gmfn, INVALID_MFN));
         new_entry = pagetable_null();
         goto install_new_entry;
     }
@@ -4014,6 +4015,11 @@ sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
     if ( sh_remove_write_access(d, gmfn, 2, 0) != 0 )
         flush_tlb_mask(d->dirty_cpumask);
     sh_set_toplevel_shadow(v, 0, gmfn, SH_type_l2_shadow);
+    if ( unlikely(pagetable_is_null(v->arch.shadow_table[0])) )
+    {
+        ASSERT(d->is_dying || d->is_shutting_down);
+        return;
+    }
 #elif GUEST_PAGING_LEVELS == 3
     /* PAE guests have four shadow_table entries, based on the
      * current values of the guest's four l3es. */
@@ -4059,6 +4065,11 @@ sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
     if ( sh_remove_write_access(d, gmfn, 4, 0) != 0 )
         flush_tlb_mask(d->dirty_cpumask);
     sh_set_toplevel_shadow(v, 0, gmfn, SH_type_l4_shadow);
+    if ( unlikely(pagetable_is_null(v->arch.shadow_table[0])) )
+    {
+        ASSERT(d->is_dying || d->is_shutting_down);
+        return;
+    }
     if ( !shadow_mode_external(d) && !is_pv_32bit_domain(d) )
     {
         mfn_t smfn = pagetable_get_mfn(v->arch.shadow_table[0]);
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.13


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:55:57 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:55:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420314.665136 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFjh-0000dA-9J; Tue, 11 Oct 2022 13:55:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420314.665136; Tue, 11 Oct 2022 13:55:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFjh-0000d2-6m; Tue, 11 Oct 2022 13:55:57 +0000
Received: by outflank-mailman (input) for mailman id 420314;
 Tue, 11 Oct 2022 13:55:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFjf-0000cn-CC
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:55:55 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFjf-0003HC-BQ
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:55:55 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFjf-0004Db-Ad
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:55:55 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=1DPOiahzEwTxHm/Y0IrND8ZzvtX6HPbBN9V4UNtDBXs=; b=dLJnYnCHW2QOOe2HkZWe1nEUoV
	rIemKn/OcfcVbD6gBqgPA0GlkrELSL3UMyyvc2DJ0PJP3rCxDVr1cHPXqUActhqD4goX8bU8NSrmo
	WWHnf5KVsn1C86pWpNhVNGJECWb4xDyK6swqT1FqYWXaupqvliHjGIMiIvAGhQ4BMNFI=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.13] x86/shadow: tolerate failure in shadow_prealloc()
Message-Id: <E1oiFjf-0004Db-Ad@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:55:55 +0000

commit 181ff7aced0e2afec4cfa57e015d01e0a0b3be59
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 15:49:18 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:49:18 2022 +0200

    x86/shadow: tolerate failure in shadow_prealloc()
    
    Prevent _shadow_prealloc() from calling BUG() when unable to fulfill
    the pre-allocation and instead return true/false.  Modify
    shadow_prealloc() to crash the domain on allocation failure (if the
    domain is not already dying), as shadow cannot operate normally after
    that.  Modify callers to also gracefully handle {_,}shadow_prealloc()
    failing to fulfill the request.
    
    Note this in turn requires adjusting the callers of
    sh_make_monitor_table() also to handle it returning INVALID_MFN.
    sh_update_paging_modes() is also modified to add additional error
    paths in case of allocation failure, some of those will return with
    null monitor page tables (and the domain likely crashed).  This is no
    different from the current error paths, but the newly introduced ones are
    more likely to trigger.
    
    The now added failure points in sh_update_paging_modes() also require
    that on some error return paths the previous structures are cleared,
    and thus the monitor table is null.
    
    While there adjust the 'type' parameter type of shadow_prealloc() to
    unsigned int rather than u32.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: b7f93c6afb12b6061e2d19de2f39ea09b569ac68
    master date: 2022-10-11 14:22:53 +0200
---
 xen/arch/x86/mm/shadow/common.c  | 62 ++++++++++++++++++++++++++++++----------
 xen/arch/x86/mm/shadow/multi.c   | 21 ++++++++++----
 xen/arch/x86/mm/shadow/private.h |  3 +-
 3 files changed, 65 insertions(+), 21 deletions(-)

diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 68d2679c7a..ab8cf7aa8c 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -36,6 +36,7 @@
 #include <asm/shadow.h>
 #include <asm/hvm/ioreq.h>
 #include <xen/numa.h>
+#include <public/sched.h>
 #include "private.h"
 
 DEFINE_PER_CPU(uint32_t,trace_shadow_path_flags);
@@ -896,14 +897,15 @@ static inline void trace_shadow_prealloc_unpin(struct domain *d, mfn_t smfn)
 
 /* Make sure there are at least count order-sized pages
  * available in the shadow page pool. */
-static void _shadow_prealloc(struct domain *d, unsigned int pages)
+static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
 {
     struct vcpu *v;
     struct page_info *sp, *t;
     mfn_t smfn;
     int i;
 
-    if ( d->arch.paging.shadow.free_pages >= pages ) return;
+    if ( d->arch.paging.shadow.free_pages >= pages )
+        return true;
 
     /* Shouldn't have enabled shadows if we've no vcpus. */
     ASSERT(d->vcpu && d->vcpu[0]);
@@ -919,7 +921,8 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
         sh_unpin(d, smfn);
 
         /* See if that freed up enough space */
-        if ( d->arch.paging.shadow.free_pages >= pages ) return;
+        if ( d->arch.paging.shadow.free_pages >= pages )
+            return true;
     }
 
     /* Stage two: all shadow pages are in use in hierarchies that are
@@ -940,7 +943,7 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
                 if ( d->arch.paging.shadow.free_pages >= pages )
                 {
                     flush_tlb_mask(d->dirty_cpumask);
-                    return;
+                    return true;
                 }
             }
         }
@@ -953,7 +956,12 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
            d->arch.paging.shadow.total_pages,
            d->arch.paging.shadow.free_pages,
            d->arch.paging.shadow.p2m_pages);
-    BUG();
+
+    ASSERT(d->is_dying);
+
+    flush_tlb_mask(d->dirty_cpumask);
+
+    return false;
 }
 
 /* Make sure there are at least count pages of the order according to
@@ -961,9 +969,19 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
  * This must be called before any calls to shadow_alloc().  Since this
  * will free existing shadows to make room, it must be called early enough
  * to avoid freeing shadows that the caller is currently working on. */
-void shadow_prealloc(struct domain *d, u32 type, unsigned int count)
+bool shadow_prealloc(struct domain *d, unsigned int type, unsigned int count)
 {
-    return _shadow_prealloc(d, shadow_size(type) * count);
+    bool ret = _shadow_prealloc(d, shadow_size(type) * count);
+
+    if ( !ret && !d->is_dying &&
+         (!d->is_shutting_down || d->shutdown_code != SHUTDOWN_crash) )
+        /*
+         * Failing to allocate memory required for shadow usage can only result in
+         * a domain crash; do it here rather than relying on every caller to do it.
+         */
+        domain_crash(d);
+
+    return ret;
 }
 
 /* Deliberately free all the memory we can: this will tear down all of
@@ -1186,7 +1204,7 @@ void shadow_free(struct domain *d, mfn_t smfn)
 static struct page_info *
 shadow_alloc_p2m_page(struct domain *d)
 {
-    struct page_info *pg;
+    struct page_info *pg = NULL;
 
     /* This is called both from the p2m code (which never holds the
      * paging lock) and the log-dirty code (which always does). */
@@ -1204,16 +1222,18 @@ shadow_alloc_p2m_page(struct domain *d)
                     d->arch.paging.shadow.p2m_pages,
                     shadow_min_acceptable_pages(d));
         }
-        paging_unlock(d);
-        return NULL;
+        goto out;
     }
 
-    shadow_prealloc(d, SH_type_p2m_table, 1);
+    if ( !shadow_prealloc(d, SH_type_p2m_table, 1) )
+        goto out;
+
     pg = mfn_to_page(shadow_alloc(d, SH_type_p2m_table, 0));
     d->arch.paging.shadow.p2m_pages++;
     d->arch.paging.shadow.total_pages--;
     ASSERT(!page_get_owner(pg) && !(pg->count_info & PGC_count_mask));
 
+ out:
     paging_unlock(d);
 
     return pg;
@@ -1304,7 +1324,9 @@ int shadow_set_allocation(struct domain *d, unsigned int pages, bool *preempted)
         else if ( d->arch.paging.shadow.total_pages > pages )
         {
             /* Need to return memory to domheap */
-            _shadow_prealloc(d, 1);
+            if ( !_shadow_prealloc(d, 1) )
+                return -ENOMEM;
+
             sp = page_list_remove_head(&d->arch.paging.shadow.freelist);
             ASSERT(sp);
             /*
@@ -2396,12 +2418,13 @@ static void sh_update_paging_modes(struct vcpu *v)
     if ( mfn_eq(v->arch.paging.shadow.oos_snapshot[0], INVALID_MFN) )
     {
         int i;
+
+        if ( !shadow_prealloc(d, SH_type_oos_snapshot, SHADOW_OOS_PAGES) )
+            return;
+
         for(i = 0; i < SHADOW_OOS_PAGES; i++)
-        {
-            shadow_prealloc(d, SH_type_oos_snapshot, 1);
             v->arch.paging.shadow.oos_snapshot[i] =
                 shadow_alloc(d, SH_type_oos_snapshot, 0);
-        }
     }
 #endif /* OOS */
 
@@ -2463,6 +2486,10 @@ static void sh_update_paging_modes(struct vcpu *v)
         if ( pagetable_is_null(v->arch.monitor_table) )
         {
             mfn_t mmfn = v->arch.paging.mode->shadow.make_monitor_table(v);
+
+            if ( mfn_eq(mmfn, INVALID_MFN) )
+                return;
+
             v->arch.monitor_table = pagetable_from_mfn(mmfn);
             make_cr3(v, mmfn);
             hvm_update_host_cr3(v);
@@ -2500,6 +2527,11 @@ static void sh_update_paging_modes(struct vcpu *v)
                 old_mfn = pagetable_get_mfn(v->arch.monitor_table);
                 v->arch.monitor_table = pagetable_null();
                 new_mfn = v->arch.paging.mode->shadow.make_monitor_table(v);
+                if ( mfn_eq(new_mfn, INVALID_MFN) )
+                {
+                    old_mode->shadow.destroy_monitor_table(v, old_mfn);
+                    return;
+                }
                 v->arch.monitor_table = pagetable_from_mfn(new_mfn);
                 SHADOW_PRINTK("new monitor table %"PRI_mfn "\n",
                                mfn_x(new_mfn));
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index bb78b387eb..a58493fb01 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -1524,7 +1524,8 @@ sh_make_monitor_table(struct vcpu *v)
     ASSERT(pagetable_get_pfn(v->arch.monitor_table) == 0);
 
     /* Guarantee we can get the memory we need */
-    shadow_prealloc(d, SH_type_monitor_table, CONFIG_PAGING_LEVELS);
+    if ( !shadow_prealloc(d, SH_type_monitor_table, CONFIG_PAGING_LEVELS) )
+        return INVALID_MFN;
 
     {
         mfn_t m4mfn;
@@ -3052,9 +3053,14 @@ static int sh_page_fault(struct vcpu *v,
      * Preallocate shadow pages *before* removing writable accesses
      * otherwhise an OOS L1 might be demoted and promoted again with
      * writable mappings. */
-    shadow_prealloc(d,
-                    SH_type_l1_shadow,
-                    GUEST_PAGING_LEVELS < 4 ? 1 : GUEST_PAGING_LEVELS - 1);
+    if ( !shadow_prealloc(d, SH_type_l1_shadow,
+                          GUEST_PAGING_LEVELS < 4
+                          ? 1 : GUEST_PAGING_LEVELS - 1) )
+    {
+        paging_unlock(d);
+        put_gfn(d, gfn_x(gfn));
+        return 0;
+    }
 
     rc = gw_remove_write_accesses(v, va, &gw);
 
@@ -3871,7 +3877,12 @@ sh_set_toplevel_shadow(struct vcpu *v,
     if ( !mfn_valid(smfn) )
     {
         /* Make sure there's enough free shadow memory. */
-        shadow_prealloc(d, root_type, 1);
+        if ( !shadow_prealloc(d, root_type, 1) )
+        {
+            new_entry = pagetable_null();
+            goto install_new_entry;
+        }
+
         /* Shadow the page. */
         smfn = sh_make_shadow(v, gmfn, root_type);
     }
diff --git a/xen/arch/x86/mm/shadow/private.h b/xen/arch/x86/mm/shadow/private.h
index 3217777921..e3f91d3576 100644
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -347,7 +347,8 @@ void shadow_promote(struct domain *d, mfn_t gmfn, u32 type);
 void shadow_demote(struct domain *d, mfn_t gmfn, u32 type);
 
 /* Shadow page allocation functions */
-void  shadow_prealloc(struct domain *d, u32 shadow_type, unsigned int count);
+bool __must_check shadow_prealloc(struct domain *d, unsigned int shadow_type,
+                                  unsigned int count);
 mfn_t shadow_alloc(struct domain *d,
                     u32 shadow_type,
                     unsigned long backpointer);
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.13


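[Editor's note] The patch above makes shadow_prealloc() fallible: it now returns a bool marked __must_check, and each caller (sh_make_monitor_table(), sh_page_fault(), sh_set_toplevel_shadow()) unwinds on failure instead of assuming the pages exist. A minimal standalone C sketch of that contract, using hypothetical stand-in names rather than Xen's real structures:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for a domain's shadow-pool state. */
struct pool {
    unsigned int free_pages;
};

/* Mirrors the new shadow_prealloc() contract: return success/failure
 * instead of void, so the compiler forces callers to check the result
 * and unwind (unlock, return an error) rather than assume the pages
 * are there. */
__attribute__((warn_unused_result))
static bool prealloc(const struct pool *p, unsigned int count)
{
    /* The real code would first try to reclaim shadows before giving up. */
    return p->free_pages >= count;
}
```

A caller then mirrors the sh_page_fault() hunk above: `if (!prealloc(&p, n)) { /* unlock, return */ }`.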
From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:56:07 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:56:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420315.665140 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFjr-0000g7-Ca; Tue, 11 Oct 2022 13:56:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420315.665140; Tue, 11 Oct 2022 13:56:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFjr-0000fz-9v; Tue, 11 Oct 2022 13:56:07 +0000
Received: by outflank-mailman (input) for mailman id 420315;
 Tue, 11 Oct 2022 13:56:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFjp-0000fh-Fk
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:56:05 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFjp-0003HZ-F0
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:56:05 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFjp-0004ET-Dj
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:56:05 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=k82vty2jCsy94IyJVL2mnOkjlfYVFqaepTPNhqpmOdM=; b=vQwEqrtS8CLFAWbxmWYKgKcPel
	PkdVyBhpbxjsn3MO+ls5ibAWkkCxkyROAo1LyRSkSYlO3TkMjDdTJrfBYthH3gMLB2ilNhzRHxNOY
	sRQ+ldhAuuP/b58Jv9pfmbe9ghcI4x3oUBfVGtqt4fRzivB9r/SnAn38ufXwlgpnzADE=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.13] x86/p2m: refuse new allocations for dying domains
Message-Id: <E1oiFjp-0004ET-Dj@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:56:05 +0000

commit 08eec20dc0550316dad64cdc63fee2371702f31f
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 15:49:35 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:49:35 2022 +0200

    x86/p2m: refuse new allocations for dying domains
    
    This will in particular prevent any attempts to add entries to the p2m,
    once - in a subsequent change - non-root entries have been removed.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: ff600a8cf8e36f8ecbffecf96a035952e022ab87
    master date: 2022-10-11 14:23:22 +0200
---
 xen/arch/x86/mm/hap/hap.c       |  5 ++++-
 xen/arch/x86/mm/shadow/common.c | 18 ++++++++++++++----
 2 files changed, 18 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index d3931b4e49..cee8caa7aa 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -244,6 +244,9 @@ static struct page_info *hap_alloc(struct domain *d)
 
     ASSERT(paging_locked_by_me(d));
 
+    if ( unlikely(d->is_dying) )
+        return NULL;
+
     pg = page_list_remove_head(&d->arch.paging.hap.freelist);
     if ( unlikely(!pg) )
         return NULL;
@@ -280,7 +283,7 @@ static struct page_info *hap_alloc_p2m_page(struct domain *d)
         d->arch.paging.hap.p2m_pages++;
         ASSERT(!page_get_owner(pg) && !(pg->count_info & PGC_count_mask));
     }
-    else if ( !d->arch.paging.p2m_alloc_failed )
+    else if ( !d->arch.paging.p2m_alloc_failed && !d->is_dying )
     {
         d->arch.paging.p2m_alloc_failed = 1;
         dprintk(XENLOG_ERR, "d%i failed to allocate from HAP pool\n",
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index ab8cf7aa8c..05d20b8b03 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -907,6 +907,10 @@ static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
     if ( d->arch.paging.shadow.free_pages >= pages )
         return true;
 
+    if ( unlikely(d->is_dying) )
+        /* No reclaim when the domain is dying, teardown will take care of it. */
+        return false;
+
     /* Shouldn't have enabled shadows if we've no vcpus. */
     ASSERT(d->vcpu && d->vcpu[0]);
 
@@ -957,7 +961,7 @@ static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
            d->arch.paging.shadow.free_pages,
            d->arch.paging.shadow.p2m_pages);
 
-    ASSERT(d->is_dying);
+    ASSERT_UNREACHABLE();
 
     flush_tlb_mask(d->dirty_cpumask);
 
@@ -971,10 +975,13 @@ static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
  * to avoid freeing shadows that the caller is currently working on. */
 bool shadow_prealloc(struct domain *d, unsigned int type, unsigned int count)
 {
-    bool ret = _shadow_prealloc(d, shadow_size(type) * count);
+    bool ret;
 
-    if ( !ret && !d->is_dying &&
-         (!d->is_shutting_down || d->shutdown_code != SHUTDOWN_crash) )
+    if ( unlikely(d->is_dying) )
+       return false;
+
+    ret = _shadow_prealloc(d, shadow_size(type) * count);
+    if ( !ret && (!d->is_shutting_down || d->shutdown_code != SHUTDOWN_crash) )
         /*
          * Failing to allocate memory required for shadow usage can only result in
          * a domain crash, do it here rather that relying on every caller to do it.
@@ -1206,6 +1213,9 @@ shadow_alloc_p2m_page(struct domain *d)
 {
     struct page_info *pg = NULL;
 
+    if ( unlikely(d->is_dying) )
+       return NULL;
+
     /* This is called both from the p2m code (which never holds the
      * paging lock) and the log-dirty code (which always does). */
     paging_lock_recursive(d);
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.13


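[Editor's note] The guard added above is the same in both pools: bail out of the allocator early once d->is_dying is set, before touching the freelist. A standalone C sketch of the idea (simplified counters, not Xen's page_info machinery):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy domain with a paging pool (hypothetical, heavily simplified). */
struct dom {
    bool is_dying;
    unsigned int free_pages;
};

/* Mirrors hap_alloc()/shadow_alloc_p2m_page(): refuse new allocations
 * once the domain is dying, so teardown owns the pool from that point
 * and no new p2m entries can be added. */
static bool pool_alloc(struct dom *d)
{
    if (d->is_dying)
        return false;           /* dying: no new allocations */
    if (d->free_pages == 0)
        return false;           /* pool exhausted */
    d->free_pages--;
    return true;
}
```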
From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:56:17 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:56:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420316.665144 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFk1-0000j4-E6; Tue, 11 Oct 2022 13:56:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420316.665144; Tue, 11 Oct 2022 13:56:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFk1-0000iw-BS; Tue, 11 Oct 2022 13:56:17 +0000
Received: by outflank-mailman (input) for mailman id 420316;
 Tue, 11 Oct 2022 13:56:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFjz-0000iV-Iv
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:56:15 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFjz-0003I9-IF
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:56:15 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFjz-0004F3-HL
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:56:15 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=Cz7sDBRVGAW7HnA1n3csWROuh4ZTimKWEslpBjYBaiM=; b=ekcQM2JKfWT3X26Lno5O4K8SL8
	Fp0yobm41Y+D110KKeYOEu7t302g5sTmCLYMeSPVwPHsaN8027+AQkSvzbBx7tBAS8ECOEHmbacnZ
	2HelSfwTwwTcDojncEp+NSZcN8DHXN14DGmsQnyquL9KpExHNaZSk0QWZb4Jd7Ae9Fcs=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.13] x86/p2m: truly free paging pool memory for dying domains
Message-Id: <E1oiFjz-0004F3-HL@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:56:15 +0000

commit 6e537d36943e5b99afe6194b7fc147610bcf9fba
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 15:49:52 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:49:52 2022 +0200

    x86/p2m: truly free paging pool memory for dying domains
    
    Modify {hap,shadow}_free to free the page immediately if the domain is
    dying, so that pages don't accumulate in the pool when
    {shadow,hap}_final_teardown() get called. This is to limit the amount of
    work which needs to be done there (in a non-preemptable manner).
    
    Note the call to shadow_free() in shadow_free_p2m_page() is moved after
    increasing total_pages, so that the decrease done in shadow_free() in
    case the domain is dying doesn't underflow the counter, even if just for
    a short interval.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: f50a2c0e1d057c00d6061f40ae24d068226052ad
    master date: 2022-10-11 14:23:51 +0200
---
 xen/arch/x86/mm/hap/hap.c       | 12 ++++++++++++
 xen/arch/x86/mm/shadow/common.c | 28 +++++++++++++++++++++++++---
 2 files changed, 37 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index cee8caa7aa..417b6ef37c 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -264,6 +264,18 @@ static void hap_free(struct domain *d, mfn_t mfn)
 
     ASSERT(paging_locked_by_me(d));
 
+    /*
+     * For dying domains, actually free the memory here. This way less work is
+     * left to hap_final_teardown(), which cannot easily have preemption checks
+     * added.
+     */
+    if ( unlikely(d->is_dying) )
+    {
+        free_domheap_page(pg);
+        d->arch.paging.hap.total_pages--;
+        return;
+    }
+
     d->arch.paging.hap.free_pages++;
     page_list_add_tail(pg, &d->arch.paging.hap.freelist);
 }
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 05d20b8b03..c178b9a5d8 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -1155,6 +1155,7 @@ mfn_t shadow_alloc(struct domain *d,
 void shadow_free(struct domain *d, mfn_t smfn)
 {
     struct page_info *next = NULL, *sp = mfn_to_page(smfn);
+    bool dying = ACCESS_ONCE(d->is_dying);
     struct page_list_head *pin_list;
     unsigned int pages;
     u32 shadow_type;
@@ -1197,11 +1198,32 @@ void shadow_free(struct domain *d, mfn_t smfn)
          * just before the allocator hands the page out again. */
         page_set_tlbflush_timestamp(sp);
         perfc_decr(shadow_alloc_count);
-        page_list_add_tail(sp, &d->arch.paging.shadow.freelist);
+
+        /*
+         * For dying domains, actually free the memory here. This way less
+         * work is left to shadow_final_teardown(), which cannot easily have
+         * preemption checks added.
+         */
+        if ( unlikely(dying) )
+        {
+            /*
+             * The backpointer field (sh.back) used by shadow code aliases the
+             * domain owner field, unconditionally clear it here to avoid
+             * free_domheap_page() attempting to parse it.
+             */
+            page_set_owner(sp, NULL);
+            free_domheap_page(sp);
+        }
+        else
+            page_list_add_tail(sp, &d->arch.paging.shadow.freelist);
+
         sp = next;
     }
 
-    d->arch.paging.shadow.free_pages += pages;
+    if ( unlikely(dying) )
+        d->arch.paging.shadow.total_pages -= pages;
+    else
+        d->arch.paging.shadow.free_pages += pages;
 }
 
 /* Divert a page from the pool to be used by the p2m mapping.
@@ -1271,9 +1293,9 @@ shadow_free_p2m_page(struct domain *d, struct page_info *pg)
      * paging lock) and the log-dirty code (which always does). */
     paging_lock_recursive(d);
 
-    shadow_free(d, page_to_mfn(pg));
     d->arch.paging.shadow.p2m_pages--;
     d->arch.paging.shadow.total_pages++;
+    shadow_free(d, page_to_mfn(pg));
 
     paging_unlock(d);
 }
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.13


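[Editor's note] The accounting change above can be summarized as: for a dying domain a freed page leaves the pool entirely (total_pages shrinks) instead of returning to the freelist (free_pages grows). A toy C sketch of just that bookkeeping (hypothetical; the real code also clears page ownership and calls free_domheap_page()):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy pool counters (hypothetical stand-in for d->arch.paging.*). */
struct dom {
    bool is_dying;
    unsigned int total_pages, free_pages;
};

/* Mirrors the hap_free()/shadow_free() change: dying domains hand the
 * memory straight back to the heap so {hap,shadow}_final_teardown()
 * has less non-preemptable work left to do. */
static void pool_free(struct dom *d, unsigned int pages)
{
    if (d->is_dying)
        d->total_pages -= pages;   /* freed to the heap immediately */
    else
        d->free_pages += pages;    /* recycled within the pool */
}
```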
From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:56:27 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:56:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420317.665147 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFkB-0000mV-G8; Tue, 11 Oct 2022 13:56:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420317.665147; Tue, 11 Oct 2022 13:56:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFkB-0000mN-Cx; Tue, 11 Oct 2022 13:56:27 +0000
Received: by outflank-mailman (input) for mailman id 420317;
 Tue, 11 Oct 2022 13:56:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFk9-0000ly-MI
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:56:25 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFk9-0003IJ-LW
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:56:25 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFk9-0004Fd-Ka
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:56:25 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=O/FrFlClYa+LxhLoX2Yk9ZKg6jPdRtSIzqfU42T5Qkw=; b=65z7LFjVufFgu73iYYqkT99chx
	9HNkT9I4ZxQJ0Iz+9GUamMAvgqbKo0C1iaTbSsuPdiXWhshKpX+qoCL8BIJwB7Jnn0PoIp2+Q6WKm
	JAgF81hK0C8OTGOcAupkXvENr0yM5HKyJ19Q6C1kiptJWLLmeJgCpm6GpRIbtxHDZyhE=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.13] x86/p2m: free the paging memory pool preemptively
Message-Id: <E1oiFk9-0004Fd-Ka@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:56:25 +0000

commit 3e7aa35a56f9e9b42c74724c4083026da8ac9bcd
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 15:50:10 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:50:10 2022 +0200

    x86/p2m: free the paging memory pool preemptively
    
    The paging memory pool is currently freed in two different places:
    from {shadow,hap}_teardown() via domain_relinquish_resources() and
    from {shadow,hap}_final_teardown() via complete_domain_destroy().
    While the former does handle preemption, the latter doesn't.
    
    Attempt to move as much p2m related freeing as possible to happen
    before the call to {shadow,hap}_teardown(), so that most memory can be
    freed in a preemptive way.  In order to avoid causing issues to
    existing callers leave the root p2m page tables set and free them in
    {hap,shadow}_final_teardown().  Also modify {hap,shadow}_free to free
    the page immediately if the domain is dying, so that pages don't
    accumulate in the pool when {shadow,hap}_final_teardown() get called.
    
    Move altp2m_vcpu_disable_ve() to be done in hap_teardown(), as that's
    the place where altp2m_active gets disabled now.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Reported-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: e7aa55c0aab36d994bf627c92bd5386ae167e16e
    master date: 2022-10-11 14:24:21 +0200
---
 xen/arch/x86/domain.c           |  7 -------
 xen/arch/x86/mm/hap/hap.c       | 39 +++++++++++++++++++++++++++++----------
 xen/arch/x86/mm/shadow/common.c | 16 ++++++++++++++++
 3 files changed, 45 insertions(+), 17 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 6199f36514..6996c6b06a 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -38,7 +38,6 @@
 #include <xen/livepatch.h>
 #include <public/sysctl.h>
 #include <public/hvm/hvm_vcpu.h>
-#include <asm/altp2m.h>
 #include <asm/regs.h>
 #include <asm/mc146818rtc.h>
 #include <asm/system.h>
@@ -2098,12 +2097,6 @@ int domain_relinquish_resources(struct domain *d)
             vpmu_destroy(v);
         }
 
-        if ( altp2m_active(d) )
-        {
-            for_each_vcpu ( d, v )
-                altp2m_vcpu_disable_ve(v);
-        }
-
         if ( is_pv_domain(d) )
         {
             for_each_vcpu ( d, v )
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 417b6ef37c..92b2014534 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -28,6 +28,7 @@
 #include <xen/domain_page.h>
 #include <xen/guest_access.h>
 #include <xen/keyhandler.h>
+#include <asm/altp2m.h>
 #include <asm/event.h>
 #include <asm/page.h>
 #include <asm/current.h>
@@ -532,18 +533,8 @@ void hap_final_teardown(struct domain *d)
     unsigned int i;
 
     if ( hvm_altp2m_supported() )
-    {
-        d->arch.altp2m_active = 0;
-
-        if ( d->arch.altp2m_eptp )
-        {
-            free_xenheap_page(d->arch.altp2m_eptp);
-            d->arch.altp2m_eptp = NULL;
-        }
-
         for ( i = 0; i < MAX_ALTP2M; i++ )
             p2m_teardown(d->arch.altp2m_p2m[i], true);
-    }
 
     /* Destroy nestedp2m's first */
     for (i = 0; i < MAX_NESTEDP2M; i++) {
@@ -558,6 +549,8 @@ void hap_final_teardown(struct domain *d)
     paging_lock(d);
     hap_set_allocation(d, 0, NULL);
     ASSERT(d->arch.paging.hap.p2m_pages == 0);
+    ASSERT(d->arch.paging.hap.free_pages == 0);
+    ASSERT(d->arch.paging.hap.total_pages == 0);
     paging_unlock(d);
 }
 
@@ -565,6 +558,7 @@ void hap_teardown(struct domain *d, bool *preempted)
 {
     struct vcpu *v;
     mfn_t mfn;
+    unsigned int i;
 
     ASSERT(d->is_dying);
     ASSERT(d != current->domain);
@@ -586,6 +580,31 @@ void hap_teardown(struct domain *d, bool *preempted)
         }
     }
 
+    paging_unlock(d);
+
+    /* Leave the root pt in case we get further attempts to modify the p2m. */
+    if ( hvm_altp2m_supported() )
+    {
+        if ( altp2m_active(d) )
+            for_each_vcpu ( d, v )
+                altp2m_vcpu_disable_ve(v);
+
+        d->arch.altp2m_active = 0;
+
+        FREE_XENHEAP_PAGE(d->arch.altp2m_eptp);
+
+        for ( i = 0; i < MAX_ALTP2M; i++ )
+            p2m_teardown(d->arch.altp2m_p2m[i], false);
+    }
+
+    /* Destroy nestedp2m's after altp2m. */
+    for ( i = 0; i < MAX_NESTEDP2M; i++ )
+        p2m_teardown(d->arch.nested_p2m[i], false);
+
+    p2m_teardown(p2m_get_hostp2m(d), false);
+
+    paging_lock(d);
+
     if ( d->arch.paging.hap.total_pages != 0 )
     {
         hap_set_allocation(d, 0, preempted);
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index c178b9a5d8..8679620f18 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2791,6 +2791,19 @@ void shadow_teardown(struct domain *d, bool *preempted)
         }
     }
 
+    paging_unlock(d);
+
+    p2m_teardown(p2m_get_hostp2m(d), false);
+
+    paging_lock(d);
+
+    /*
+     * Reclaim all shadow memory so that shadow_set_allocation() doesn't find
+     * in-use pages, as _shadow_prealloc() will no longer try to reclaim pages
+     * because the domain is dying.
+     */
+    shadow_blow_tables(d);
+
 #if (SHADOW_OPTIMIZATIONS & (SHOPT_VIRTUAL_TLB|SHOPT_OUT_OF_SYNC))
     /* Free the virtual-TLB array attached to each vcpu */
     for_each_vcpu(d, v)
@@ -2909,6 +2922,9 @@ void shadow_final_teardown(struct domain *d)
                    d->arch.paging.shadow.total_pages,
                    d->arch.paging.shadow.free_pages,
                    d->arch.paging.shadow.p2m_pages);
+    ASSERT(!d->arch.paging.shadow.total_pages);
+    ASSERT(!d->arch.paging.shadow.free_pages);
+    ASSERT(!d->arch.paging.shadow.p2m_pages);
     paging_unlock(d);
 }
 
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.13


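[Editor's note] The teardown restructuring above exists so that the bulk of the pool can be freed in a preemptive way, with only the root tables left for the final (non-preemptable) teardown. A standalone C sketch of a preemptible freeing loop in that style, with a hypothetical stub in place of Xen's general_preempt_check():

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for general_preempt_check(): pretend a pending
 * event shows up once roughly 3000 pages have been freed. */
static bool stub_preempt_check(unsigned long freed)
{
    return freed >= 3000;
}

/* Sketch of a preemptible teardown loop: poll for preemption only every
 * 1024 iterations and report it via *preempted, so the caller can bail
 * out and resume the teardown later. */
static unsigned long free_pool(unsigned long pages, bool *preempted)
{
    unsigned long freed = 0;
    unsigned int i = 0;

    while (freed < pages) {
        freed++;                        /* stands in for free_page() */
        /* Check preemption periodically; a NULL pointer means the
         * caller cannot be preempted (the final-teardown case). */
        if (preempted && !(++i % 1024) && stub_preempt_check(freed)) {
            *preempted = true;
            break;
        }
    }
    return freed;
}
```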
From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:56:37 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:56:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420318.665153 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFkL-0000p5-HP; Tue, 11 Oct 2022 13:56:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420318.665153; Tue, 11 Oct 2022 13:56:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFkL-0000ox-EZ; Tue, 11 Oct 2022 13:56:37 +0000
Received: by outflank-mailman (input) for mailman id 420318;
 Tue, 11 Oct 2022 13:56:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFkJ-0000of-PX
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:56:35 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFkJ-0003IR-Oj
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:56:35 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFkJ-0004G9-O0
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:56:35 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=5oE/ZG/zOUw2XMzGaWiGsoRmYD5bNMm9U9+v+GLVZSM=; b=mi70uVMWlkNkb0B2QFrftE8/7s
	NbMToZ0+pY7dCmd8cT2z4q9TUuV4riZxhKbpTIUSraQYVjDFbf8ouIAoUjVSmV4zETqAJ01H9DGqL
	Pw4M1plQLxyL/vqs4/p44AgsdQTor1+Eirn30K7GqEH8wOuAeiVs73OWUYKkm2W/Thhc=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.13] xen/x86: p2m: Add preemption in p2m_teardown()
Message-Id: <E1oiFkJ-0004G9-O0@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:56:35 +0000

commit eed4ef4177b8267f2b6f403db00ed393a371285f
Author:     Julien Grall <jgrall@amazon.com>
AuthorDate: Tue Oct 11 15:50:28 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:50:28 2022 +0200

    xen/x86: p2m: Add preemption in p2m_teardown()
    
    The list p2m->pages contains all the pages used by the P2M. On a large
    instance this can be quite large, and the time spent calling
    d->arch.paging.free_page() will take more than 1ms for an 80GB guest
    on a Xen running in a nested environment on a c5.metal.
    
    By extrapolation, it would take > 100ms for an 8TB guest (what we
    currently security-support). So add some preemption in p2m_teardown()
    and propagate it to the callers. Note there are 3 places where
    the preemption is not enabled:
        - hap_final_teardown()/shadow_final_teardown(): We are
          preventing updates to the P2M once the domain is dying (so
          no more pages can be allocated) and most of the P2M pages
          will be freed in a preemptive manner when relinquishing the
          resources. So it is fine to disable preemption.
        - shadow_enable(): This is fine because it will undo the allocation
          that may have been made by p2m_alloc_table() (so only the root
          page table).
    
    The preemption is arbitrarily checked every 1024 iterations.
    
    We now need to include <xen/event.h> in p2m-basic in order to
    import the definition for local_events_need_delivery() used by
    general_preempt_check(). Ideally, the inclusion should happen in
    xen/sched.h but it opened a can of worms.
    
    Note that with the current approach, Xen doesn't keep track of whether
    the alt/nested P2Ms have been cleared, so there is some redundant work.
    However, this is not expected to incur too much overhead (the P2M lock
    shouldn't be contended during teardown), so this optimization is
    left outside of the security event.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    master commit: 8a2111250b424edc49c65c4d41b276766d30635c
    master date: 2022-10-11 14:24:48 +0200
---
 xen/arch/x86/mm/hap/hap.c       | 22 ++++++++++++++++------
 xen/arch/x86/mm/p2m.c           | 18 +++++++++++++++---
 xen/arch/x86/mm/shadow/common.c | 12 +++++++++---
 xen/include/asm-x86/p2m.h       |  2 +-
 4 files changed, 41 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 92b2014534..34bbe50be0 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -534,17 +534,17 @@ void hap_final_teardown(struct domain *d)
 
     if ( hvm_altp2m_supported() )
         for ( i = 0; i < MAX_ALTP2M; i++ )
-            p2m_teardown(d->arch.altp2m_p2m[i], true);
+            p2m_teardown(d->arch.altp2m_p2m[i], true, NULL);
 
     /* Destroy nestedp2m's first */
     for (i = 0; i < MAX_NESTEDP2M; i++) {
-        p2m_teardown(d->arch.nested_p2m[i], true);
+        p2m_teardown(d->arch.nested_p2m[i], true, NULL);
     }
 
     if ( d->arch.paging.hap.total_pages != 0 )
         hap_teardown(d, NULL);
 
-    p2m_teardown(p2m_get_hostp2m(d), true);
+    p2m_teardown(p2m_get_hostp2m(d), true, NULL);
     /* Free any memory that the p2m teardown released */
     paging_lock(d);
     hap_set_allocation(d, 0, NULL);
@@ -594,14 +594,24 @@ void hap_teardown(struct domain *d, bool *preempted)
         FREE_XENHEAP_PAGE(d->arch.altp2m_eptp);
 
         for ( i = 0; i < MAX_ALTP2M; i++ )
-            p2m_teardown(d->arch.altp2m_p2m[i], false);
+        {
+            p2m_teardown(d->arch.altp2m_p2m[i], false, preempted);
+            if ( preempted && *preempted )
+                return;
+        }
     }
 
     /* Destroy nestedp2m's after altp2m. */
     for ( i = 0; i < MAX_NESTEDP2M; i++ )
-        p2m_teardown(d->arch.nested_p2m[i], false);
+    {
+        p2m_teardown(d->arch.nested_p2m[i], false, preempted);
+        if ( preempted && *preempted )
+            return;
+    }
 
-    p2m_teardown(p2m_get_hostp2m(d), false);
+    p2m_teardown(p2m_get_hostp2m(d), false, preempted);
+    if ( preempted && *preempted )
+        return;
 
     paging_lock(d);
 
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 859edfc95b..5bc2e483a3 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -737,12 +737,13 @@ int p2m_alloc_table(struct p2m_domain *p2m)
  * hvm fixme: when adding support for pvh non-hardware domains, this path must
  * cleanup any foreign p2m types (release refcnts on them).
  */
-void p2m_teardown(struct p2m_domain *p2m, bool remove_root)
+void p2m_teardown(struct p2m_domain *p2m, bool remove_root, bool *preempted)
 /* Return all the p2m pages to Xen.
  * We know we don't have any extra mappings to these pages */
 {
     struct page_info *pg, *root_pg = NULL;
     struct domain *d;
+    unsigned int i = 0;
 
     if (p2m == NULL)
         return;
@@ -761,8 +762,19 @@ void p2m_teardown(struct p2m_domain *p2m, bool remove_root)
     }
 
     while ( (pg = page_list_remove_head(&p2m->pages)) )
-        if ( pg != root_pg )
-            d->arch.paging.free_page(d, pg);
+    {
+        if ( pg == root_pg )
+            continue;
+
+        d->arch.paging.free_page(d, pg);
+
+        /* Arbitrarily check preemption every 1024 iterations */
+        if ( preempted && !(++i % 1024) && general_preempt_check() )
+        {
+            *preempted = true;
+            break;
+        }
+    }
 
     if ( root_pg )
         page_list_add(root_pg, &p2m->pages);
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 8679620f18..e6af359579 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2747,8 +2747,12 @@ int shadow_enable(struct domain *d, u32 mode)
  out_locked:
     paging_unlock(d);
  out_unlocked:
+    /*
+     * This is fine to ignore the preemption here because only the root
+     * will be allocated by p2m_alloc_table().
+     */
     if ( rv != 0 && !pagetable_is_null(p2m_get_pagetable(p2m)) )
-        p2m_teardown(p2m, true);
+        p2m_teardown(p2m, true, NULL);
     if ( rv != 0 && pg != NULL )
     {
         pg->count_info &= ~PGC_count_mask;
@@ -2793,7 +2797,9 @@ void shadow_teardown(struct domain *d, bool *preempted)
 
     paging_unlock(d);
 
-    p2m_teardown(p2m_get_hostp2m(d), false);
+    p2m_teardown(p2m_get_hostp2m(d), false, preempted);
+    if ( preempted && *preempted )
+        return;
 
     paging_lock(d);
 
@@ -2912,7 +2918,7 @@ void shadow_final_teardown(struct domain *d)
         shadow_teardown(d, NULL);
 
     /* It is now safe to pull down the p2m map. */
-    p2m_teardown(p2m_get_hostp2m(d), true);
+    p2m_teardown(p2m_get_hostp2m(d), true, NULL);
     /* Free any shadow memory that the p2m teardown released */
     paging_lock(d);
     shadow_set_allocation(d, 0, NULL);
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index cab4ca60fa..8ba8cd6a02 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -599,7 +599,7 @@ int p2m_init(struct domain *d);
 int p2m_alloc_table(struct p2m_domain *p2m);
 
 /* Return all the p2m resources to Xen. */
-void p2m_teardown(struct p2m_domain *p2m, bool remove_root);
+void p2m_teardown(struct p2m_domain *p2m, bool remove_root, bool *preempted);
 void p2m_final_teardown(struct domain *d);
 
 /* Add a page to a domain's p2m table */
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.13


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:56:47 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:56:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420319.665156 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFkV-0000rf-J9; Tue, 11 Oct 2022 13:56:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420319.665156; Tue, 11 Oct 2022 13:56:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFkV-0000rX-G5; Tue, 11 Oct 2022 13:56:47 +0000
Received: by outflank-mailman (input) for mailman id 420319;
 Tue, 11 Oct 2022 13:56:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFkT-0000rJ-T2
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:56:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFkT-0003K2-SG
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:56:45 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFkT-0004Ge-RD
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:56:45 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=NOIxy+bc1GZTwPvfFLrVeUiFeuFhsW7EAnqwb/CyP+M=; b=bh6OdDB3msRpvPBS2vfBcj8Boy
	8GP1b9EWn3ITQnv2Hm9+DngFaEa7AcI9rDIgZBRg1ngEWqD0BL+JuOe7vXbkovnCEIhX+yGfsB3Lt
	W3hOhWPTSxguZpRWcmXP+9zYeKMvmfo764XRRPIztDklW0gsKYgT9asy2Vw4wotPr+xM=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.13] libxl, docs: Use arch-specific default paging memory
Message-Id: <E1oiFkT-0004Ge-RD@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:56:45 +0000

commit 9992c089de1fbb4d3217d2421ca60295998645d7
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Tue Oct 11 15:51:26 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:51:26 2022 +0200

    libxl, docs: Use arch-specific default paging memory
    
    The default paging memory (described in the `shadow_memory` entry in
    the xl config) in libxl is used to determine the memory pool size for
    xl guests. Currently this size is only used for x86, and contains a
    part of RAM to shadow the resident processes. Since there are no
    shadow mode guests on Arm, the part of RAM to shadow the resident
    processes is not necessary. Therefore, this commit splits the function
    `libxl_get_required_shadow_memory()` into arch-specific helpers and
    renames the helper to `libxl__arch_get_required_paging_memory()`.
    
    On x86, this helper returns the original value from
    `libxl_get_required_shadow_memory()`, so no functional change is
    intended.
    
    On Arm, this helper returns 1MB per vCPU, plus 4KB per MiB of RAM
    for the P2M map, plus an additional 512KB.
    
    Also update the xl.cfg documentation to add Arm documentation
    matching the code changes, and correct the comment style to follow
    the Xen coding style.
    
    This is part of CVE-2022-33747 / XSA-409.
    
    Suggested-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    master commit: 156a239ea288972425f967ac807b3cb5b5e14874
    master date: 2022-10-11 14:28:37 +0200
---
 docs/man/xl.cfg.5.pod.in  |  5 +++++
 tools/libxl/libxl_arch.h  |  4 ++++
 tools/libxl/libxl_arm.c   | 12 ++++++++++++
 tools/libxl/libxl_utils.c |  9 ++-------
 tools/libxl/libxl_x86.c   | 12 ++++++++++++
 5 files changed, 35 insertions(+), 7 deletions(-)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 245d3f9472..3b297c6a97 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -1790,6 +1790,11 @@ are not using hardware assisted paging (i.e. you are using shadow
 mode) and your guest workload consists of a very large number of
 similar processes then increasing this value may improve performance.
 
+On Arm, this field is used to determine the size of the guest P2M pages
+pool, and the default value is 1MB per vCPU plus 4KB per MB of RAM for
+the P2M map. Users should adjust this value if bigger P2M pool size is
+needed.
+
 =back
 
 =head3 Processor and Platform Features
diff --git a/tools/libxl/libxl_arch.h b/tools/libxl/libxl_arch.h
index 6a91775b9e..b09f868490 100644
--- a/tools/libxl/libxl_arch.h
+++ b/tools/libxl/libxl_arch.h
@@ -83,6 +83,10 @@ int libxl__arch_extra_memory(libxl__gc *gc,
                              const libxl_domain_build_info *info,
                              uint64_t *out);
 
+_hidden
+unsigned long libxl__arch_get_required_paging_memory(unsigned long maxmem_kb,
+                                                     unsigned int smp_cpus);
+
 #if defined(__i386__) || defined(__x86_64__)
 
 #define LAPIC_BASE_ADDRESS  0xfee00000
diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
index 34f8a29056..f4b3dc8e71 100644
--- a/tools/libxl/libxl_arm.c
+++ b/tools/libxl/libxl_arm.c
@@ -153,6 +153,18 @@ out:
     return rc;
 }
 
+unsigned long libxl__arch_get_required_paging_memory(unsigned long maxmem_kb,
+                                                     unsigned int smp_cpus)
+{
+    /*
+     * 256 pages (1MB) per vcpu,
+     * plus 1 page per MiB of RAM for the P2M map,
+     * This is higher than the minimum that Xen would allocate if no value
+     * were given (but the Xen minimum is for safety, not performance).
+     */
+    return 4 * (256 * smp_cpus + maxmem_kb / 1024);
+}
+
 static struct arch_info {
     const char *guest_type;
     const char *timer_compat;
diff --git a/tools/libxl/libxl_utils.c b/tools/libxl/libxl_utils.c
index b039143b8a..e18b1524ef 100644
--- a/tools/libxl/libxl_utils.c
+++ b/tools/libxl/libxl_utils.c
@@ -18,6 +18,7 @@
 #include <ctype.h>
 
 #include "libxl_internal.h"
+#include "libxl_arch.h"
 #include "_paths.h"
 
 #ifndef LIBXL_HAVE_NONCONST_LIBXL_BASENAME_RETURN_VALUE
@@ -39,13 +40,7 @@ char *libxl_basename(const char *name)
 
 unsigned long libxl_get_required_shadow_memory(unsigned long maxmem_kb, unsigned int smp_cpus)
 {
-    /* 256 pages (1MB) per vcpu,
-       plus 1 page per MiB of RAM for the P2M map,
-       plus 1 page per MiB of RAM to shadow the resident processes.
-       This is higher than the minimum that Xen would allocate if no value
-       were given (but the Xen minimum is for safety, not performance).
-     */
-    return 4 * (256 * smp_cpus + 2 * (maxmem_kb / 1024));
+    return libxl__arch_get_required_paging_memory(maxmem_kb, smp_cpus);
 }
 
 char *libxl_domid_to_name(libxl_ctx *ctx, uint32_t domid)
diff --git a/tools/libxl/libxl_x86.c b/tools/libxl/libxl_x86.c
index f34c0edc10..348876e5c0 100644
--- a/tools/libxl/libxl_x86.c
+++ b/tools/libxl/libxl_x86.c
@@ -681,6 +681,18 @@ int libxl__arch_passthrough_mode_setdefault(libxl__gc *gc,
     return rc;
 }
 
+unsigned long libxl__arch_get_required_paging_memory(unsigned long maxmem_kb,
+                                                     unsigned int smp_cpus)
+{
+    /*
+     * 256 pages (1MB) per vcpu,
+     * plus 1 page per MiB of RAM for the P2M map,
+     * plus 1 page per MiB of RAM to shadow the resident processes.
+     * This is higher than the minimum that Xen would allocate if no value
+     * were given (but the Xen minimum is for safety, not performance).
+     */
+    return 4 * (256 * smp_cpus + 2 * (maxmem_kb / 1024));
+}
 
 /*
  * Local variables:
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.13


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:56:57 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:56:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420320.665160 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFkf-0000v8-MP; Tue, 11 Oct 2022 13:56:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420320.665160; Tue, 11 Oct 2022 13:56:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFkf-0000v0-Jb; Tue, 11 Oct 2022 13:56:57 +0000
Received: by outflank-mailman (input) for mailman id 420320;
 Tue, 11 Oct 2022 13:56:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFke-0000uj-0L
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:56:56 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFkd-0003KD-Vp
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:56:55 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFkd-0004H4-UX
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:56:55 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=F8f8kZRzNTrkN4EgQOzgtEGW8N3InM8nckamfnDmIP8=; b=RFz2fmTeOfAv6g7kQF6M05DD4e
	ByAnTMJTFItYYAr2OFCt3Q4LOD6jSIEPHg6p08blAaFX/VODXRrqn3fo0IOOfm/r2E/UjRlGU09I9
	Ka1y/3QM/CEFg4MkDuPe6ikJAiHSkxtP+8Qtdz4lkiW74xXpOXmEUkNh0sdKmiMIiF0Y=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.13] xen/arm: Construct the P2M pages pool for guests
Message-Id: <E1oiFkd-0004H4-UX@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:56:55 +0000

commit 2ae9bbef0f84a025719382ffcf44882b76316d62
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Tue Oct 11 15:51:45 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:51:45 2022 +0200

    xen/arm: Construct the P2M pages pool for guests
    
    This commit constructs the p2m pages pool for guests from the
    data structure and helper perspective.
    
    This is implemented by:
    
    - Adding a `struct paging_domain`, which contains a freelist, a
    counter variable and a spinlock, to `struct arch_domain` to track
    the free p2m pages and the total number of p2m pages in the p2m
    pages pool.
    
    - Adding a helper `p2m_get_allocation` to get the p2m pool size.
    
    - Adding a helper `p2m_set_allocation` to set the p2m pages pool
    size. This helper should be called before allocating memory for
    a guest.
    
    - Adding a helper `p2m_teardown_allocation` to free the p2m pages
    pool. This helper should be called during xl domain destruction.
    
    This is part of CVE-2022-33747 / XSA-409.
    
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    master commit: 55914f7fc91a468649b8a3ec3f53ae1c4aca6670
    master date: 2022-10-11 14:28:39 +0200
---
 xen/arch/arm/p2m.c           | 88 ++++++++++++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/domain.h | 10 +++++
 xen/include/asm-arm/p2m.h    |  4 ++
 3 files changed, 102 insertions(+)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 42638787a2..7d6fec7887 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -53,6 +53,92 @@ static uint64_t generate_vttbr(uint16_t vmid, mfn_t root_mfn)
     return (mfn_to_maddr(root_mfn) | ((uint64_t)vmid << 48));
 }
 
+/* Return the size of the pool, rounded up to the nearest MB */
+unsigned int p2m_get_allocation(struct domain *d)
+{
+    unsigned long nr_pages = ACCESS_ONCE(d->arch.paging.p2m_total_pages);
+
+    return ROUNDUP(nr_pages, 1 << (20 - PAGE_SHIFT)) >> (20 - PAGE_SHIFT);
+}
+
+/*
+ * Set the pool of pages to the required number of pages.
+ * Returns 0 for success, non-zero for failure.
+ * Call with d->arch.paging.lock held.
+ */
+int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted)
+{
+    struct page_info *pg;
+
+    ASSERT(spin_is_locked(&d->arch.paging.lock));
+
+    for ( ; ; )
+    {
+        if ( d->arch.paging.p2m_total_pages < pages )
+        {
+            /* Need to allocate more memory from domheap */
+            pg = alloc_domheap_page(NULL, 0);
+            if ( pg == NULL )
+            {
+                printk(XENLOG_ERR "Failed to allocate P2M pages.\n");
+                return -ENOMEM;
+            }
+            ACCESS_ONCE(d->arch.paging.p2m_total_pages) =
+                d->arch.paging.p2m_total_pages + 1;
+            page_list_add_tail(pg, &d->arch.paging.p2m_freelist);
+        }
+        else if ( d->arch.paging.p2m_total_pages > pages )
+        {
+            /* Need to return memory to domheap */
+            pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
+            if( pg )
+            {
+                ACCESS_ONCE(d->arch.paging.p2m_total_pages) =
+                    d->arch.paging.p2m_total_pages - 1;
+                free_domheap_page(pg);
+            }
+            else
+            {
+                printk(XENLOG_ERR
+                       "Failed to free P2M pages, P2M freelist is empty.\n");
+                return -ENOMEM;
+            }
+        }
+        else
+            break;
+
+        /* Check to see if we need to yield and try again */
+        if ( preempted && general_preempt_check() )
+        {
+            *preempted = true;
+            return -ERESTART;
+        }
+    }
+
+    return 0;
+}
+
+int p2m_teardown_allocation(struct domain *d)
+{
+    int ret = 0;
+    bool preempted = false;
+
+    spin_lock(&d->arch.paging.lock);
+    if ( d->arch.paging.p2m_total_pages != 0 )
+    {
+        ret = p2m_set_allocation(d, 0, &preempted);
+        if ( preempted )
+        {
+            spin_unlock(&d->arch.paging.lock);
+            return -ERESTART;
+        }
+        ASSERT(d->arch.paging.p2m_total_pages == 0);
+    }
+    spin_unlock(&d->arch.paging.lock);
+
+    return ret;
+}
+
 /* Unlock the flush and do a P2M TLB flush if necessary */
 void p2m_write_unlock(struct p2m_domain *p2m)
 {
@@ -1567,7 +1653,9 @@ int p2m_init(struct domain *d)
     unsigned int cpu;
 
     rwlock_init(&p2m->lock);
+    spin_lock_init(&d->arch.paging.lock);
     INIT_PAGE_LIST_HEAD(&p2m->pages);
+    INIT_PAGE_LIST_HEAD(&d->arch.paging.p2m_freelist);
 
     p2m->vmid = INVALID_VMID;
 
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 9b44a9648c..7bc14c2e9e 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -42,6 +42,14 @@ struct vtimer {
         uint64_t cval;
 };
 
+struct paging_domain {
+    spinlock_t lock;
+    /* Free P2M pages from the pre-allocated P2M pool */
+    struct page_list_head p2m_freelist;
+    /* Number of pages from the pre-allocated P2M pool */
+    unsigned long p2m_total_pages;
+};
+
 struct arch_domain
 {
 #ifdef CONFIG_ARM_64
@@ -53,6 +61,8 @@ struct arch_domain
 
     struct hvm_domain hvm;
 
+    struct paging_domain paging;
+
     struct vmmio vmmio;
 
     /* Continuable domain_relinquish_resources(). */
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 20df621271..b1c9b947bb 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -197,6 +197,10 @@ void p2m_restore_state(struct vcpu *n);
 /* Print debugging/statistial info about a domain's p2m */
 void p2m_dump_info(struct domain *d);
 
+unsigned int p2m_get_allocation(struct domain *d);
+int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted);
+int p2m_teardown_allocation(struct domain *d);
+
 static inline void p2m_write_lock(struct p2m_domain *p2m)
 {
     write_lock(&p2m->lock);
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.13


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:57:07 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:57:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420321.665164 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFkp-0000zW-O1; Tue, 11 Oct 2022 13:57:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420321.665164; Tue, 11 Oct 2022 13:57:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFkp-0000zO-L7; Tue, 11 Oct 2022 13:57:07 +0000
Received: by outflank-mailman (input) for mailman id 420321;
 Tue, 11 Oct 2022 13:57:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFko-0000zC-3U
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:57:06 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFko-0003KU-2l
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:57:06 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFko-0004Hd-1w
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:57:06 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=+bVUYzKFfC9fJbGPxgUfj9mVIzILby5YU2vepZUHygc=; b=LEIuffxGJd+43hkC0Vy0TQM35R
	bTDAVy5HNxfiZ/LgA0C70WcREFGaDuRGXnbE+6XjcypVTKjI0Zg6xGvqtyoXZ1ph5m1xMUr+WDjcA
	AcLaylJWW9LPCCB5NzQTZp0NxfI5LM9T6d+JVY6scjkVQXkgr+rxIEHIrzm1yqgGpUAA=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.13] xen/arm, libxl: Implement XEN_DOMCTL_shadow_op for Arm
Message-Id: <E1oiFko-0004Hd-1w@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:57:06 +0000

commit e6b1e3892b685346490eded1f6b6f5392b1020b0
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Tue Oct 11 15:52:02 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:52:02 2022 +0200

    xen/arm, libxl: Implement XEN_DOMCTL_shadow_op for Arm
    
    This commit implements the `XEN_DOMCTL_shadow_op` support in Xen
    for Arm. The p2m pages pool size for xl guests is supposed to be
    determined by `XEN_DOMCTL_shadow_op`. Hence, this commit:
    
    - Introduces a function `p2m_domctl` and implements the subops
    `XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION` and
    `XEN_DOMCTL_SHADOW_OP_GET_ALLOCATION` of `XEN_DOMCTL_shadow_op`.
    
    - Adds the `XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION` support in libxl.
    
    This enables setting the shadow memory pool size when creating a
    guest from xl, and getting the shadow memory pool size from Xen.
    
    Note that the `XEN_DOMCTL_shadow_op` added in this commit is only
    a dummy op, and the functionality of setting/getting p2m memory pool
    size for xl guests will be added in the following commits.
    
    This is part of CVE-2022-33747 / XSA-409.
    
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    master commit: cf2a68d2ffbc3ce95e01449d46180bddb10d24a0
    master date: 2022-10-11 14:28:42 +0200
---
 tools/libxl/libxl_arm.c | 12 ++++++++++++
 xen/arch/arm/domctl.c   | 32 ++++++++++++++++++++++++++++++++
 2 files changed, 44 insertions(+)

diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
index f4b3dc8e71..025df1bfd0 100644
--- a/tools/libxl/libxl_arm.c
+++ b/tools/libxl/libxl_arm.c
@@ -130,6 +130,18 @@ int libxl__arch_domain_save_config(libxl__gc *gc,
 int libxl__arch_domain_create(libxl__gc *gc, libxl_domain_config *d_config,
                               uint32_t domid)
 {
+    libxl_ctx *ctx = libxl__gc_owner(gc);
+    unsigned int shadow_mb = DIV_ROUNDUP(d_config->b_info.shadow_memkb, 1024);
+
+    int r = xc_shadow_control(ctx->xch, domid,
+                              XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION,
+                              &shadow_mb, 0);
+    if (r) {
+        LOGED(ERROR, domid,
+              "Failed to set %u MiB shadow allocation", shadow_mb);
+        return ERROR_FAIL;
+    }
+
     return 0;
 }
 
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 9da88b8c64..ef1299ae1c 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -45,11 +45,43 @@ static int handle_vuart_init(struct domain *d,
     return rc;
 }
 
+static long p2m_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
+                       XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
+{
+    if ( unlikely(d == current->domain) )
+    {
+        printk(XENLOG_ERR "Tried to do a p2m domctl op on itself.\n");
+        return -EINVAL;
+    }
+
+    if ( unlikely(d->is_dying) )
+    {
+        printk(XENLOG_ERR "Tried to do a p2m domctl op on dying domain %u\n",
+               d->domain_id);
+        return -EINVAL;
+    }
+
+    switch ( sc->op )
+    {
+    case XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION:
+        return 0;
+    case XEN_DOMCTL_SHADOW_OP_GET_ALLOCATION:
+        return 0;
+    default:
+    {
+        printk(XENLOG_ERR "Bad p2m domctl op %u\n", sc->op);
+        return -EINVAL;
+    }
+    }
+}
+
 long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
                     XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     switch ( domctl->cmd )
     {
+    case XEN_DOMCTL_shadow_op:
+        return p2m_domctl(d, &domctl->u.shadow_op, u_domctl);
     case XEN_DOMCTL_cacheflush:
     {
         gfn_t s = _gfn(domctl->u.cacheflush.start_pfn);
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.13


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:57:16 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Oct 2022 13:57:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.420322.665167 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFky-000126-PQ; Tue, 11 Oct 2022 13:57:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 420322.665167; Tue, 11 Oct 2022 13:57:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiFky-00011y-Ma; Tue, 11 Oct 2022 13:57:16 +0000
Received: by outflank-mailman (input) for mailman id 420322;
 Tue, 11 Oct 2022 13:57:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFky-00011o-6a
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:57:16 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFky-0003Ku-5t
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:57:16 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiFky-0004I2-5A
 for xen-changelog@lists.xenproject.org; Tue, 11 Oct 2022 13:57:16 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=zYUz09iMzB+rep4G9KRTL/QgaYswd4MwsbLq+iPl9qQ=; b=KjOgFl/80MEmTsuEYkrr37601J
	lkRgB8MlsVmngdltAhewgMu+yriX43c8KfcVJoLKuPANUjMa3pKOCen6VdK1rr1MlKEz1H9+brK5l
	Ffyd4k6MrPuUtAGgpWfNzjYSGZFJyvAJOEj+49tiJ9wPLOm3Sg9rXS78a0+5Cti46tvo=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.13] xen/arm: Allocate and free P2M pages from the P2M pool
Message-Id: <E1oiFky-0004I2-5A@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:57:16 +0000

commit 867fcf6ca2e6a5dfb490bc5a1bd9b36d8ba88531
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Tue Oct 11 15:52:18 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:52:18 2022 +0200

    xen/arm: Allocate and free P2M pages from the P2M pool
    
    This commit sets up and tears down the p2m pages pool for
    non-privileged Arm guests by calling `p2m_set_allocation` and
    `p2m_teardown_allocation`.
    
    - For dom0, P2M pages should come directly from the heap instead of
    the p2m pool, so that the kernel may take advantage of the extended
    regions.
    
    - For xl guests, the setting of the p2m pool is called in
    `XEN_DOMCTL_shadow_op` and the p2m pool is destroyed in
    `domain_relinquish_resources`. Note that domctl->u.shadow_op.mb is
    updated with the new size when setting the p2m pool.
    
    - For dom0less domUs, the setting of the p2m pool is called before
    allocating memory during domain creation. Users can specify the p2m
    pool size via the `xen,domain-p2m-mem-mb` dts property.
    
    To actually allocate/free pages from the p2m pool, this commit adds
    two helper functions namely `p2m_alloc_page` and `p2m_free_page` to
    `struct p2m_domain`. By replacing the `alloc_domheap_page` and
    `free_domheap_page` with these two helper functions, p2m pages can
    be added/removed from the list of p2m pool rather than from the heap.
    
    Since the page returned by `p2m_alloc_page` is already cleaned, take the
    opportunity to remove the redundant `clean_page` in `p2m_create_table`.
    
    This is part of CVE-2022-33747 / XSA-409.
    
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    master commit: cbea5a1149ca7fd4b7cdbfa3ec2e4f109b601ff7
    master date: 2022-10-11 14:28:44 +0200
---
 docs/misc/arm/device-tree/booting.txt |  8 +++++
 xen/arch/arm/domain.c                 |  8 +++++
 xen/arch/arm/domain_build.c           | 29 ++++++++++++++++++
 xen/arch/arm/domctl.c                 | 23 +++++++++++++-
 xen/arch/arm/p2m.c                    | 57 ++++++++++++++++++++++++++++++++---
 xen/include/asm-arm/domain.h          |  1 +
 6 files changed, 121 insertions(+), 5 deletions(-)

diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
index 5243bc7fd3..470c9491a7 100644
--- a/docs/misc/arm/device-tree/booting.txt
+++ b/docs/misc/arm/device-tree/booting.txt
@@ -164,6 +164,14 @@ with the following properties:
     Both #address-cells and #size-cells need to be specified because
     both sub-nodes (described shortly) have reg properties.
 
+- xen,domain-p2m-mem-mb
+
+    Optional. A 32-bit integer specifying the amount of megabytes of RAM
+    used for the domain P2M pool. This is in-sync with the shadow_memory
+    option in xl.cfg. Leaving this field empty in device tree will lead to
+    the default size of domain P2M pool, i.e. 1MB per guest vCPU plus 4KB
+    per MB of guest RAM plus 512KB for guest extended regions.
+
 Under the "xen,domain" compatible node, one or more sub-nodes are present
 for the DomU kernel and ramdisk.
 
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 1e24a7dbb4..31abe7d6f9 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -1022,6 +1022,14 @@ int domain_relinquish_resources(struct domain *d)
         if ( ret )
             return ret;
 
+        d->arch.relmem = RELMEM_p2m_pool;
+        /* Fallthrough */
+
+    case RELMEM_p2m_pool:
+        ret = p2m_teardown_allocation(d);
+        if( ret )
+            return ret;
+
         d->arch.relmem = RELMEM_done;
         /* Fallthrough */
 
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index ce7f61e825..eb859600e5 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2327,6 +2327,21 @@ static void __init find_gnttab_region(struct domain *d,
            kinfo->gnttab_start, kinfo->gnttab_start + kinfo->gnttab_size);
 }
 
+static unsigned long __init domain_p2m_pages(unsigned long maxmem_kb,
+                                             unsigned int smp_cpus)
+{
+    /*
+     * Keep in sync with libxl__get_required_paging_memory().
+     * 256 pages (1MB) per vcpu, plus 1 page per MiB of RAM for the P2M map,
+     * plus 128 pages to cover extended regions.
+     */
+    unsigned long memkb = 4 * (256 * smp_cpus + (maxmem_kb / 1024) + 128);
+
+    BUILD_BUG_ON(PAGE_SIZE != SZ_4K);
+
+    return DIV_ROUND_UP(memkb, 1024) << (20 - PAGE_SHIFT);
+}
+
 static int __init construct_domain(struct domain *d, struct kernel_info *kinfo)
 {
     unsigned int i;
@@ -2418,6 +2433,8 @@ static int __init construct_domU(struct domain *d,
     struct kernel_info kinfo = {};
     int rc;
     u64 mem;
+    u32 p2m_mem_mb;
+    unsigned long p2m_pages;
 
     rc = dt_property_read_u64(node, "memory", &mem);
     if ( !rc )
@@ -2427,6 +2444,18 @@ static int __init construct_domU(struct domain *d,
     }
     kinfo.unassigned_mem = (paddr_t)mem * SZ_1K;
 
+    rc = dt_property_read_u32(node, "xen,domain-p2m-mem-mb", &p2m_mem_mb);
+    /* If xen,domain-p2m-mem-mb is not specified, use the default value. */
+    p2m_pages = rc ?
+                p2m_mem_mb << (20 - PAGE_SHIFT) :
+                domain_p2m_pages(mem, d->max_vcpus);
+
+    spin_lock(&d->arch.paging.lock);
+    rc = p2m_set_allocation(d, p2m_pages, NULL);
+    spin_unlock(&d->arch.paging.lock);
+    if ( rc != 0 )
+        return rc;
+
     printk("*** LOADING DOMU cpus=%u memory=%"PRIx64"KB ***\n", d->max_vcpus, mem);
 
     kinfo.vpl011 = dt_property_read_bool(node, "vpl011");
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index ef1299ae1c..dab3da3a23 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -48,6 +48,9 @@ static int handle_vuart_init(struct domain *d,
 static long p2m_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
                        XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
+    long rc;
+    bool preempted = false;
+
     if ( unlikely(d == current->domain) )
     {
         printk(XENLOG_ERR "Tried to do a p2m domctl op on itself.\n");
@@ -64,9 +67,27 @@ static long p2m_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
     switch ( sc->op )
     {
     case XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION:
-        return 0;
+    {
+        /* Allow and handle preemption */
+        spin_lock(&d->arch.paging.lock);
+        rc = p2m_set_allocation(d, sc->mb << (20 - PAGE_SHIFT), &preempted);
+        spin_unlock(&d->arch.paging.lock);
+
+        if ( preempted )
+            /* Not finished. Set up to re-run the call. */
+            rc = hypercall_create_continuation(__HYPERVISOR_domctl, "h",
+                                               u_domctl);
+        else
+            /* Finished. Return the new allocation. */
+            sc->mb = p2m_get_allocation(d);
+
+        return rc;
+    }
     case XEN_DOMCTL_SHADOW_OP_GET_ALLOCATION:
+    {
+        sc->mb = p2m_get_allocation(d);
         return 0;
+    }
     default:
     {
         printk(XENLOG_ERR "Bad p2m domctl op %u\n", sc->op);
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 7d6fec7887..3196690544 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -53,6 +53,54 @@ static uint64_t generate_vttbr(uint16_t vmid, mfn_t root_mfn)
     return (mfn_to_maddr(root_mfn) | ((uint64_t)vmid << 48));
 }
 
+static struct page_info *p2m_alloc_page(struct domain *d)
+{
+    struct page_info *pg;
+
+    spin_lock(&d->arch.paging.lock);
+    /*
+     * For hardware domain, there should be no limit in the number of pages that
+     * can be allocated, so that the kernel may take advantage of the extended
+     * regions. Hence, allocate p2m pages for hardware domains from heap.
+     */
+    if ( is_hardware_domain(d) )
+    {
+        pg = alloc_domheap_page(NULL, 0);
+        if ( pg == NULL )
+        {
+            printk(XENLOG_G_ERR "Failed to allocate P2M pages for hwdom.\n");
+            spin_unlock(&d->arch.paging.lock);
+            return NULL;
+        }
+    }
+    else
+    {
+        pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
+        if ( unlikely(!pg) )
+        {
+            spin_unlock(&d->arch.paging.lock);
+            return NULL;
+        }
+        d->arch.paging.p2m_total_pages--;
+    }
+    spin_unlock(&d->arch.paging.lock);
+
+    return pg;
+}
+
+static void p2m_free_page(struct domain *d, struct page_info *pg)
+{
+    spin_lock(&d->arch.paging.lock);
+    if ( is_hardware_domain(d) )
+        free_domheap_page(pg);
+    else
+    {
+        d->arch.paging.p2m_total_pages++;
+        page_list_add_tail(pg, &d->arch.paging.p2m_freelist);
+    }
+    spin_unlock(&d->arch.paging.lock);
+}
+
 /* Return the size of the pool, rounded up to the nearest MB */
 unsigned int p2m_get_allocation(struct domain *d)
 {
@@ -754,7 +802,7 @@ static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry)
 
     ASSERT(!p2m_is_valid(*entry));
 
-    page = alloc_domheap_page(NULL, 0);
+    page = p2m_alloc_page(p2m->domain);
     if ( page == NULL )
         return -ENOMEM;
 
@@ -874,7 +922,7 @@ static void p2m_free_entry(struct p2m_domain *p2m,
     pg = mfn_to_page(mfn);
 
     page_list_del(pg, &p2m->pages);
-    free_domheap_page(pg);
+    p2m_free_page(p2m->domain, pg);
 }
 
 static bool p2m_split_superpage(struct p2m_domain *p2m, lpae_t *entry,
@@ -898,7 +946,7 @@ static bool p2m_split_superpage(struct p2m_domain *p2m, lpae_t *entry,
     ASSERT(level < target);
     ASSERT(p2m_is_superpage(*entry, level));
 
-    page = alloc_domheap_page(NULL, 0);
+    page = p2m_alloc_page(p2m->domain);
     if ( !page )
         return false;
 
@@ -1609,7 +1657,7 @@ int p2m_teardown(struct domain *d)
 
     while ( (pg = page_list_remove_head(&p2m->pages)) )
     {
-        free_domheap_page(pg);
+        p2m_free_page(p2m->domain, pg);
         count++;
         /* Arbitrarily preempt every 512 iterations */
         if ( !(count % 512) && hypercall_preempt_check() )
@@ -1633,6 +1681,7 @@ void p2m_final_teardown(struct domain *d)
         return;
 
     ASSERT(page_list_empty(&p2m->pages));
+    ASSERT(page_list_empty(&d->arch.paging.p2m_freelist));
 
     if ( p2m->root )
         free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 7bc14c2e9e..dc5b26d15e 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -73,6 +73,7 @@ struct arch_domain
         RELMEM_page,
         RELMEM_mapping,
         RELMEM_p2m,
+        RELMEM_p2m_pool,
         RELMEM_done,
     } relmem;
 
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.13


From xen-changelog-bounces@lists.xenproject.org Tue Oct 11 13:57:26 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.13] gnttab: correct locking on transitive grant copy error path
Message-Id: <E1oiFl8-0004IW-87@xenbits.xenproject.org>
Date: Tue, 11 Oct 2022 13:57:26 +0000

commit 042de0843936b690acbc6dbcf57d26f6adccfc06
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 15:53:28 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:53:28 2022 +0200

    gnttab: correct locking on transitive grant copy error path
    
    While the comment next to the lock dropping in preparation for
    recursively calling acquire_grant_for_copy() mistakenly talks about the
    rd == td case (excluded a few lines further up), the same concerns apply
    to calling release_grant_for_copy() on a subsequent error path.
    
    This is CVE-2022-33748 / XSA-411.
    
    Fixes: ad48fb963dbf ("gnttab: fix transitive grant handling")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    master commit: 6e3aab858eef614a21a782a3b73acc88e74690ea
    master date: 2022-10-11 14:29:30 +0200
---
 xen/common/grant_table.c | 21 +++++++++++++++++----
 1 file changed, 17 insertions(+), 4 deletions(-)

diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 709509e0fc..d242c08038 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -2584,9 +2584,8 @@ acquire_grant_for_copy(
                      trans_domid);
 
         /*
-         * acquire_grant_for_copy() could take the lock on the
-         * remote table (if rd == td), so we have to drop the lock
-         * here and reacquire.
+         * acquire_grant_for_copy() will take the lock on the remote table,
+         * so we have to drop the lock here and reacquire.
          */
         active_entry_release(act);
         grant_read_unlock(rgt);
@@ -2623,11 +2622,25 @@ acquire_grant_for_copy(
                           act->trans_gref != trans_gref ||
                           !act->is_sub_page)) )
         {
+            /*
+             * Like above for acquire_grant_for_copy() we need to drop and then
+             * re-acquire the locks here to prevent lock order inversion issues.
+             * Unlike for acquire_grant_for_copy() we don't need to re-check
+             * anything, as release_grant_for_copy() doesn't depend on the grant
+             * table entry: It only updates internal state and the status flags.
+             */
+            active_entry_release(act);
+            grant_read_unlock(rgt);
+
             release_grant_for_copy(td, trans_gref, readonly);
-            fixup_status_for_copy_pin(rd, act, status);
             rcu_unlock_domain(td);
+
+            grant_read_lock(rgt);
+            act = active_entry_acquire(rgt, gref);
+            fixup_status_for_copy_pin(rd, act, status);
             active_entry_release(act);
             grant_read_unlock(rgt);
+
             put_page(*page);
             *page = NULL;
             return ERESTART;
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.13


From xen-changelog-bounces@lists.xenproject.org Wed Oct 12 15:44:11 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.15] tools/tests: fix wrong backport of upstream commit 52daa6a8483e4
Message-Id: <E1oidts-0008T6-2n@xenbits.xenproject.org>
Date: Wed, 12 Oct 2022 15:44:04 +0000

commit 0d233924d4b0f676056856096e8761205add3ee8
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Wed Oct 12 17:31:44 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Wed Oct 12 17:31:44 2022 +0200

    tools/tests: fix wrong backport of upstream commit 52daa6a8483e4
    
    The backport of upstream commit 52daa6a8483e4 had a bug; correct it.
    
    Fixes: 3ac64b375183 ("xen/gnttab: fix gnttab_acquire_resource()")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
 tools/tests/resource/test-resource.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/tests/resource/test-resource.c b/tools/tests/resource/test-resource.c
index bf485baff2..51a8f4a000 100644
--- a/tools/tests/resource/test-resource.c
+++ b/tools/tests/resource/test-resource.c
@@ -71,7 +71,7 @@ static void test_gnttab(uint32_t domid, unsigned int nr_frames)
     res = xenforeignmemory_map_resource(
         fh, domid, XENMEM_resource_grant_table,
         XENMEM_resource_grant_table_id_status, 0, 1,
-        (void **)&gnttab, PROT_READ | PROT_WRITE, 0);
+        &addr, PROT_READ | PROT_WRITE, 0);
 
     if ( res )
     {
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.15


From xen-changelog-bounces@lists.xenproject.org Wed Oct 12 15:44:15 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.15] libxl/Arm: correct xc_shadow_control() invocation to fix build
Message-Id: <E1oidu2-0008Tf-6F@xenbits.xenproject.org>
Date: Wed, 12 Oct 2022 15:44:14 +0000

commit 816580afdd1730d4f85f64477a242a439af1cdf8
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Wed Oct 12 17:33:40 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Wed Oct 12 17:33:40 2022 +0200

    libxl/Arm: correct xc_shadow_control() invocation to fix build
    
    The backport didn't adapt to the earlier function prototype taking more
    (unused here) arguments.
    
    Fixes: c5215044578e ("xen/arm, libxl: Implement XEN_DOMCTL_shadow_op for Arm")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Henry Wang <Henry.Wang@arm.com>
    Acked-by: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/libs/light/libxl_arm.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index d21f614ed7..ba548befdd 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -132,14 +132,14 @@ int libxl__arch_domain_create(libxl__gc *gc,
                               uint32_t domid)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
-    unsigned int shadow_mb = DIV_ROUNDUP(d_config->b_info.shadow_memkb, 1024);
+    unsigned long shadow_mb = DIV_ROUNDUP(d_config->b_info.shadow_memkb, 1024);
 
     int r = xc_shadow_control(ctx->xch, domid,
                               XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION,
-                              &shadow_mb, 0);
+                              NULL, 0, &shadow_mb, 0, NULL);
     if (r) {
         LOGED(ERROR, domid,
-              "Failed to set %u MiB shadow allocation", shadow_mb);
+              "Failed to set %lu MiB shadow allocation", shadow_mb);
         return ERROR_FAIL;
     }
 
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.15


From xen-changelog-bounces@lists.xenproject.org Wed Oct 12 15:44:25 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.14] libxl/Arm: correct xc_shadow_control() invocation to fix build
Message-Id: <E1oiduC-0008UU-Eu@xenbits.xenproject.org>
Date: Wed, 12 Oct 2022 15:44:24 +0000

commit 016de62747b26ead5a5c763b640fe8e205cd182b
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Wed Oct 12 17:36:03 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Wed Oct 12 17:36:03 2022 +0200

    libxl/Arm: correct xc_shadow_control() invocation to fix build
    
    The backport didn't adapt to the earlier function prototype taking more
    (unused here) arguments.
    
    Fixes: c5215044578e ("xen/arm, libxl: Implement XEN_DOMCTL_shadow_op for Arm")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Henry Wang <Henry.Wang@arm.com>
    Acked-by: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/libxl/libxl_arm.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
index 025df1bfd0..79cfb9cd29 100644
--- a/tools/libxl/libxl_arm.c
+++ b/tools/libxl/libxl_arm.c
@@ -131,14 +131,14 @@ int libxl__arch_domain_create(libxl__gc *gc, libxl_domain_config *d_config,
                               uint32_t domid)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
-    unsigned int shadow_mb = DIV_ROUNDUP(d_config->b_info.shadow_memkb, 1024);
+    unsigned long shadow_mb = DIV_ROUNDUP(d_config->b_info.shadow_memkb, 1024);
 
     int r = xc_shadow_control(ctx->xch, domid,
                               XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION,
-                              &shadow_mb, 0);
+                              NULL, 0, &shadow_mb, 0, NULL);
     if (r) {
         LOGED(ERROR, domid,
-              "Failed to set %u MiB shadow allocation", shadow_mb);
+              "Failed to set %lu MiB shadow allocation", shadow_mb);
         return ERROR_FAIL;
     }
 
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.14


From xen-changelog-bounces@lists.xenproject.org Wed Oct 12 15:44:35 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Oct 2022 15:44:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.421277.666508 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiduN-0007xu-Nw; Wed, 12 Oct 2022 15:44:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 421277.666508; Wed, 12 Oct 2022 15:44:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oiduN-0007xm-LK; Wed, 12 Oct 2022 15:44:35 +0000
Received: by outflank-mailman (input) for mailman id 421277;
 Wed, 12 Oct 2022 15:44:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiduM-0007xc-Oo
 for xen-changelog@lists.xenproject.org; Wed, 12 Oct 2022 15:44:34 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiduM-00006v-O1
 for xen-changelog@lists.xenproject.org; Wed, 12 Oct 2022 15:44:34 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oiduM-0008VI-N4
 for xen-changelog@lists.xenproject.org; Wed, 12 Oct 2022 15:44:34 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=6ao5x83Tr9WdARrRVGZZtrSqzGwun8rkUBvEZCwu19w=; b=Lg+XyjApbQr4lFpTAWM1v9dZxx
	sLvM9CF1RKxSyyXKjiLG9ecVBuPH0BYPlql+9cts8uAiBa5Oo2GqPIPahU5wstXhSTAMUteW14jsW
	O5biy5vlsTCJhxhrmUY8ZPiBNaDGYyCepJNKahZm5FHHJpAYA3GGgsK3UNsSgIslUMPI=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.13] libxl/Arm: correct xc_shadow_control() invocation to fix build
Message-Id: <E1oiduM-0008VI-N4@xenbits.xenproject.org>
Date: Wed, 12 Oct 2022 15:44:34 +0000

commit 0be63c2615b268001f7cc9b72ce25eed952737dc
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Wed Oct 12 17:36:48 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Wed Oct 12 17:36:48 2022 +0200

    libxl/Arm: correct xc_shadow_control() invocation to fix build
    
    The backport didn't adapt to the earlier function prototype taking more
    (unused here) arguments.
    
    Fixes: c5215044578e ("xen/arm, libxl: Implement XEN_DOMCTL_shadow_op for Arm")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Henry Wang <Henry.Wang@arm.com>
    Acked-by: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/libxl/libxl_arm.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
index 025df1bfd0..79cfb9cd29 100644
--- a/tools/libxl/libxl_arm.c
+++ b/tools/libxl/libxl_arm.c
@@ -131,14 +131,14 @@ int libxl__arch_domain_create(libxl__gc *gc, libxl_domain_config *d_config,
                               uint32_t domid)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
-    unsigned int shadow_mb = DIV_ROUNDUP(d_config->b_info.shadow_memkb, 1024);
+    unsigned long shadow_mb = DIV_ROUNDUP(d_config->b_info.shadow_memkb, 1024);
 
     int r = xc_shadow_control(ctx->xch, domid,
                               XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION,
-                              &shadow_mb, 0);
+                              NULL, 0, &shadow_mb, 0, NULL);
     if (r) {
         LOGED(ERROR, domid,
-              "Failed to set %u MiB shadow allocation", shadow_mb);
+              "Failed to set %lu MiB shadow allocation", shadow_mb);
         return ERROR_FAIL;
     }
 
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.13


From xen-changelog-bounces@lists.xenproject.org Wed Oct 12 16:00:08 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Oct 2022 16:00:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.421292.666523 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oie9O-0002P8-3e; Wed, 12 Oct 2022 16:00:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 421292.666523; Wed, 12 Oct 2022 16:00:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oie9O-0002Ol-0k; Wed, 12 Oct 2022 16:00:06 +0000
Received: by outflank-mailman (input) for mailman id 421292;
 Wed, 12 Oct 2022 16:00:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oie9N-0002FA-18
 for xen-changelog@lists.xenproject.org; Wed, 12 Oct 2022 16:00:05 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oie9M-0000sa-Uy
 for xen-changelog@lists.xenproject.org; Wed, 12 Oct 2022 16:00:04 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oie9M-0000pH-Tu
 for xen-changelog@lists.xenproject.org; Wed, 12 Oct 2022 16:00:04 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=PazKnGVM9eWvrE/hntpXLcf5irq7PJaBD8QDWH09kgc=; b=RYUi05CzXnM4YjY+oLxkuFAcdk
	j2NpXpwL0Qsi9lFjIwj/1pqPILb9DA2Qs+beDwn1b6MpCQKGuCkzzRLp+Yks2wpOFx33W8RPbOsIA
	MBNOPqdnTJ1yEc5GY2QPCZH1pjFkAkBnp2IDQQpBzKh58dfUX74SBQmUSBi1I/kX1sLA=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] VMX: correct error handling in vmx_create_vmcs()
Message-Id: <E1oie9M-0000pH-Tu@xenbits.xenproject.org>
Date: Wed, 12 Oct 2022 16:00:04 +0000

commit 448d28309f1a966bdc850aff1a637e0b79a03e43
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Wed Oct 12 17:57:56 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Wed Oct 12 17:57:56 2022 +0200

    VMX: correct error handling in vmx_create_vmcs()
    
    With the addition of vmx_add_msr() calls to construct_vmcs() there are
    now cases where simply freeing the VMCS isn't enough: the MSR bitmap
    page as well as one of the MSR area pages (if it's the 2nd vmx_add_msr()
    which fails) may also need freeing. Switch to using vmx_destroy_vmcs()
    instead.
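
    A rough illustrative sketch of the pattern (placeholder names and
    structures, not the real Xen API): construction can fail after several
    resources are already allocated, so the error path must call the full
    teardown routine rather than freeing only the first resource.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-ins for the VMCS page, MSR bitmap and MSR area. */
struct fake_vmcs {
    void *vmcs;
    void *msr_bitmap;
    void *msr_area;
};

/* Full teardown: frees every piece, tolerating NULL members (in the same
 * spirit as vmx_destroy_vmcs() vs. the narrower vmx_free_vmcs()). */
static void fake_destroy(struct fake_vmcs *v)
{
    free(v->msr_area);
    free(v->msr_bitmap);
    free(v->vmcs);
    v->msr_area = v->msr_bitmap = v->vmcs = NULL;
}

/* Construction that may fail part-way; 'fail_at' simulates which
 * allocation fails.  On any failure, run the full teardown. */
static int fake_create(struct fake_vmcs *v, int fail_at)
{
    v->vmcs = malloc(16);
    v->msr_bitmap = fail_at > 1 ? malloc(16) : NULL;
    if (!v->msr_bitmap) {
        fake_destroy(v);          /* not just free(v->vmcs) */
        return -1;
    }
    v->msr_area = fail_at > 2 ? malloc(16) : NULL;
    if (!v->msr_area) {
        fake_destroy(v);
        return -1;
    }
    return 0;
}
```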
    
    Fixes: 3bd36952dab6 ("x86/spec-ctrl: Introduce an option to control L1D_FLUSH for HVM HAP guests")
    Fixes: 53a570b28569 ("x86/spec-ctrl: Support IBPB-on-entry")
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/arch/x86/hvm/vmx/vmcs.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 4f12fa06ac..a1aca1ec04 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -1821,7 +1821,7 @@ int vmx_create_vmcs(struct vcpu *v)
 
     if ( (rc = construct_vmcs(v)) != 0 )
     {
-        vmx_free_vmcs(vmx->vmcs_pa);
+        vmx_destroy_vmcs(v);
         return rc;
     }
 
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Thu Oct 13 10:55:13 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Oct 2022 10:55:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.421853.667514 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oivrn-0001Yh-2k; Thu, 13 Oct 2022 10:55:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 421853.667514; Thu, 13 Oct 2022 10:55:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oivrn-0001YZ-0C; Thu, 13 Oct 2022 10:55:07 +0000
Received: by outflank-mailman (input) for mailman id 421853;
 Thu, 13 Oct 2022 10:55:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oivrl-0001Wz-5d
 for xen-changelog@lists.xenproject.org; Thu, 13 Oct 2022 10:55:05 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oivrl-00058P-1Z
 for xen-changelog@lists.xenproject.org; Thu, 13 Oct 2022 10:55:05 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oivrl-0000vK-0U
 for xen-changelog@lists.xenproject.org; Thu, 13 Oct 2022 10:55:05 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=Jn40ARfnj7MPU6vTixdj4nXifmkTIVmCXgqZKt3az+c=; b=XKIsLnx9/Lfoxy7/+OW8hankg7
	i07n6YaQcdtUXuPjs6GtnEkGfPImoMlUkpA9zqxZMTIOmCo8UC8ZwnS/O044FgOyjDEplp8hBB+ka
	47jb8IdeuF1jrXKdfSTMCzFZWy6h430C/cmrNJKqo6CmoBBbSyBNQD8Yuz1BTxNp9ABU=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] tools/ocaml/xc: Fix code legibility in stub_xc_domain_create()
Message-Id: <E1oivrl-0000vK-0U@xenbits.xenproject.org>
Date: Thu, 13 Oct 2022 10:55:05 +0000

commit 1f232670f806d20471fc4205069448292e2df2df
Author:     Andrew Cooper <andrew.cooper3@citrix.com>
AuthorDate: Wed Oct 12 11:02:08 2022 +0100
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Thu Oct 13 11:41:48 2022 +0100

    tools/ocaml/xc: Fix code legibility in stub_xc_domain_create()
    
    Reposition the defines to match the outer style and to make the logic
    half-legible.
    
    No functional change.
    
    Fixes: 0570d7f276dd ("x86/msr: introduce an option for compatible MSR behavior selection")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/ocaml/libs/xc/xenctrl_stubs.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index 19335bdf45..fe9c00ce00 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -232,22 +232,20 @@ CAMLprim value stub_xc_domain_create(value xch, value wanted_domid, value config
 
         /* Mnemonics for the named fields inside xen_x86_arch_domainconfig */
 #define VAL_EMUL_FLAGS          Field(arch_domconfig, 0)
+#define VAL_MISC_FLAGS          Field(arch_domconfig, 1)
 
 		cfg.arch.emulation_flags = ocaml_list_to_c_bitmap
 			/* ! x86_arch_emulation_flags X86_EMU_ none */
 			/* ! XEN_X86_EMU_ XEN_X86_EMU_ALL all */
 			(VAL_EMUL_FLAGS);
 
-#undef VAL_EMUL_FLAGS
-
-#define VAL_MISC_FLAGS          Field(arch_domconfig, 1)
-
 		cfg.arch.misc_flags = ocaml_list_to_c_bitmap
 			/* ! x86_arch_misc_flags X86_ none */
 			/* ! XEN_X86_ XEN_X86_MISC_FLAGS_MAX max */
 			(VAL_MISC_FLAGS);
 
 #undef VAL_MISC_FLAGS
+#undef VAL_EMUL_FLAGS
 
 #else
 		caml_failwith("Unhandled: x86");
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Thu Oct 13 10:55:17 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Oct 2022 10:55:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.421854.667518 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oivrx-0001ac-4F; Thu, 13 Oct 2022 10:55:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 421854.667518; Thu, 13 Oct 2022 10:55:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oivrx-0001aU-1e; Thu, 13 Oct 2022 10:55:17 +0000
Received: by outflank-mailman (input) for mailman id 421854;
 Thu, 13 Oct 2022 10:55:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oivrv-0001aF-6t
 for xen-changelog@lists.xenproject.org; Thu, 13 Oct 2022 10:55:15 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oivrv-00058U-62
 for xen-changelog@lists.xenproject.org; Thu, 13 Oct 2022 10:55:15 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oivrv-0000vr-3g
 for xen-changelog@lists.xenproject.org; Thu, 13 Oct 2022 10:55:15 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=RaOVaYhoBdoncZwPT3EhQe/P0NIz8ypDPQI0jHe/jfA=; b=xzmV0cyWbpyn2/Q02w7DCWWA/n
	Z2grareNFEQgzMyi1JwgMEBUWMpK8bZJ6COF0UcEZCrKNF3hN+ReUL9UStW3PgNTMPl5jghO8tWrU
	u2q/rIPLhHjaKPH9/zyik/2+JJZMxdmHyj/HLYIJMBZSYp4ulRDq6U4tbKf8eas1T4JE=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] tools/ocaml/xc: Address ABI issues with physinfo arch flags
Message-Id: <E1oivrv-0000vr-3g@xenbits.xenproject.org>
Date: Thu, 13 Oct 2022 10:55:15 +0000

commit 0823d57d71c7023bea94d483f69f7b5e62820102
Author:     Andrew Cooper <andrew.cooper3@citrix.com>
AuthorDate: Mon Jul 25 18:36:29 2022 +0100
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Thu Oct 13 11:45:19 2022 +0100

    tools/ocaml/xc: Address ABI issues with physinfo arch flags
    
    The current bindings function, but the preexisting
    
      type physinfo_arch_cap_flag =
             | X86 of x86_physinfo_arch_cap_flag
    
    is a special case in the OCaml type system with an unusual indirection, and
    will break when a second option, e.g. `| ARM of ...` is added.
    
    Also, the position of the list is logically wrong.  Currently, the types express
    a list of elements which might be an x86 flag or an arm flag (and can
    intermix), whereas what we actually want is either a list of x86 flags, or a
    list of ARM flags (that cannot intermix).
    
    Rework the OCaml types to avoid the ABI special case and move the list
    primitive, and adjust the C bindings to match.
    
    Fixes: 2ce11ce249a3 ("x86/HVM: allow per-domain usage of hardware virtualized APIC")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/ocaml/libs/xc/xenctrl.ml      | 10 ++++++----
 tools/ocaml/libs/xc/xenctrl.mli     | 11 +++++++----
 tools/ocaml/libs/xc/xenctrl_stubs.c | 21 +++++++++++----------
 3 files changed, 24 insertions(+), 18 deletions(-)

diff --git a/tools/ocaml/libs/xc/xenctrl.ml b/tools/ocaml/libs/xc/xenctrl.ml
index 0c71e5eef3..28ed642231 100644
--- a/tools/ocaml/libs/xc/xenctrl.ml
+++ b/tools/ocaml/libs/xc/xenctrl.ml
@@ -130,13 +130,15 @@ type physinfo_cap_flag =
 	| CAP_Gnttab_v1
 	| CAP_Gnttab_v2
 
+type arm_physinfo_cap_flag
 
-type x86_physinfo_arch_cap_flag =
+type x86_physinfo_cap_flag =
 	| CAP_X86_ASSISTED_XAPIC
 	| CAP_X86_ASSISTED_X2APIC
 
-type physinfo_arch_cap_flag =
-	| X86 of x86_physinfo_arch_cap_flag
+type arch_physinfo_cap_flags =
+	| ARM of arm_physinfo_cap_flag list
+	| X86 of x86_physinfo_cap_flag list
 
 type physinfo =
 {
@@ -151,7 +153,7 @@ type physinfo =
 	(* XXX hw_cap *)
 	capabilities     : physinfo_cap_flag list;
 	max_nr_cpus      : int;
-	arch_capabilities : physinfo_arch_cap_flag list;
+	arch_capabilities : arch_physinfo_cap_flags;
 }
 
 type version =
diff --git a/tools/ocaml/libs/xc/xenctrl.mli b/tools/ocaml/libs/xc/xenctrl.mli
index a8458e19ca..c2076d60c9 100644
--- a/tools/ocaml/libs/xc/xenctrl.mli
+++ b/tools/ocaml/libs/xc/xenctrl.mli
@@ -115,12 +115,15 @@ type physinfo_cap_flag =
   | CAP_Gnttab_v1
   | CAP_Gnttab_v2
 
-type x86_physinfo_arch_cap_flag =
+type arm_physinfo_cap_flag
+
+type x86_physinfo_cap_flag =
   | CAP_X86_ASSISTED_XAPIC
   | CAP_X86_ASSISTED_X2APIC
 
-type physinfo_arch_cap_flag =
-  | X86 of x86_physinfo_arch_cap_flag
+type arch_physinfo_cap_flags =
+  | ARM of arm_physinfo_cap_flag list
+  | X86 of x86_physinfo_cap_flag list
 
 type physinfo = {
   threads_per_core : int;
@@ -133,7 +136,7 @@ type physinfo = {
   scrub_pages      : nativeint;
   capabilities     : physinfo_cap_flag list;
   max_nr_cpus      : int; (** compile-time max possible number of nr_cpus *)
-  arch_capabilities : physinfo_arch_cap_flag list;
+  arch_capabilities : arch_physinfo_cap_flags;
 }
 type version = { major : int; minor : int; extra : string; }
 type compile_info = {
diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index fe9c00ce00..a8789d19be 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -716,9 +716,9 @@ CAMLprim value stub_xc_send_debug_keys(value xch, value keys)
 CAMLprim value stub_xc_physinfo(value xch)
 {
 	CAMLparam1(xch);
-	CAMLlocal4(physinfo, cap_list, x86_arch_cap_list, arch_cap_list);
+	CAMLlocal4(physinfo, cap_list, arch_cap_flags, arch_cap_list);
 	xc_physinfo_t c_physinfo;
-	int r;
+	int r, arch_cap_flags_tag;
 
 	caml_enter_blocking_section();
 	r = xc_physinfo(_H(xch), &c_physinfo);
@@ -748,18 +748,19 @@ CAMLprim value stub_xc_physinfo(value xch)
 	Store_field(physinfo, 9, Val_int(c_physinfo.max_cpu_id + 1));
 
 #if defined(__i386__) || defined(__x86_64__)
-	x86_arch_cap_list = c_bitmap_to_ocaml_list
-		/* ! x86_physinfo_arch_cap_flag CAP_X86_ none */
+	arch_cap_list = c_bitmap_to_ocaml_list
+		/* ! x86_physinfo_cap_flag CAP_X86_ none */
 		/* ! XEN_SYSCTL_PHYSCAP_X86_ XEN_SYSCTL_PHYSCAP_X86_MAX max */
 		(c_physinfo.arch_capabilities);
-	/*
-	 * arch_capabilities: physinfo_arch_cap_flag list;
-	 */
-	arch_cap_list = x86_arch_cap_list;
+
+	arch_cap_flags_tag = 1; /* tag x86 */
 #else
-	arch_cap_list = Val_emptylist;
+	caml_failwith("Unhandled architecture");
 #endif
-	Store_field(physinfo, 10, arch_cap_list);
+
+	arch_cap_flags = caml_alloc_small(1, arch_cap_flags_tag);
+	Store_field(arch_cap_flags, 0, arch_cap_list);
+	Store_field(physinfo, 10, arch_cap_flags);
 
 	CAMLreturn(physinfo);
 }
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Thu Oct 13 16:00:13 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Oct 2022 16:00:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.422307.668232 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oj0cv-0008Dp-CF; Thu, 13 Oct 2022 16:00:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 422307.668232; Thu, 13 Oct 2022 16:00:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oj0cv-0008DS-8j; Thu, 13 Oct 2022 16:00:05 +0000
Received: by outflank-mailman (input) for mailman id 422307;
 Thu, 13 Oct 2022 16:00:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oj0cu-00083A-GZ
 for xen-changelog@lists.xenproject.org; Thu, 13 Oct 2022 16:00:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oj0cu-0002zF-Ab
 for xen-changelog@lists.xenproject.org; Thu, 13 Oct 2022 16:00:04 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oj0cu-0007ii-9W
 for xen-changelog@lists.xenproject.org; Thu, 13 Oct 2022 16:00:04 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=rJYWwp236/I63BI6r3c29VIi/uPa3/qPnd/vnH6ihwA=; b=j2Pr6LB525fbYYVLOGH2gFHNFW
	qk9Xlj6uDAEWRsPSk8yZSBiZIkY2z49eeo9oH6luTALcldTSCiak3mKkpokcx8BLMqrhGGemTAAqO
	QTXL6R16a77oqIbCKPn6wz10vp18nIC+TNnWVnGKU/bRrQDjx2UhIigsSIRrBoP0Wj5s=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] x86/mwait-idle: add 'preferred-cstates' command line option
Message-Id: <E1oj0cu-0007ii-9W@xenbits.xenproject.org>
Date: Thu, 13 Oct 2022 16:00:04 +0000

commit 9fc9a5c21612993fbd2bb1acdd68d9181ab6f7d2
Author:     Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
AuthorDate: Thu Oct 13 17:52:36 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Thu Oct 13 17:52:36 2022 +0200

    x86/mwait-idle: add 'preferred-cstates' command line option
    
    On Sapphire Rapids Xeon (SPR) the C1 and C1E states are basically mutually
    exclusive - only one of them can be enabled. By default, the 'intel_idle' driver
    enables C1 and disables C1E. However, some users prefer to use C1E instead of
    C1, because it saves more energy.
    
    This patch adds a new module parameter ('preferred_cstates') for enabling C1E
    and disabling C1. Here is the idea behind it.
    
    1. This option has effect only for "mutually exclusive" C-states like C1 and
       C1E on SPR.
    2. It does not have any effect on independent C-states, which do not require
       other C-states to be disabled (most states on most platforms as of today).
    3. For mutually exclusive C-states, the 'intel_idle' driver always has a
       reasonable default, such as enabling C1 on SPR by default. On other
       platforms, the default may be different.
    4. Users can override the default using the 'preferred_cstates' parameter.
    5. The parameter accepts the preferred C-states bit-mask, similarly to the
       existing 'states_off' parameter.
    6. This parameter is not limited to C1/C1E, and leaves room for supporting
       other mutually exclusive C-states, if they come in the future.
    
    Today 'intel_idle' can only be compiled-in, which means that on SPR, in order
    to disable C1 and enable C1E, users should boot with the following kernel
    argument: intel_idle.preferred_cstates=4
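
    As a rough sketch (not the actual Xen or Linux parser; the state table
    and exact names below are assumptions for illustration), mapping a
    comma-separated list of state names to the bit-mask form follows the
    driver's "bit 1 for C1" numbering, so C1E lands on bit 2 (value 4,
    matching the preferred_cstates=4 example above):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical ordered C-state table; state i is represented by
 * bit (i + 1), so C1 -> bit 1 (value 2), C1E -> bit 2 (value 4). */
static const char *const cstate_names[] = { "C1", "C1E", "C2", "C6" };

/* Map a comma-separated list such as "C1,C1E" to a preference bit-mask.
 * Unrecognized names are silently ignored in this sketch. */
static unsigned int parse_preferred(const char *s)
{
    unsigned int mask = 0;

    while (*s) {
        size_t len = strcspn(s, ",");   /* length of the next token */
        size_t i;

        for (i = 0; i < sizeof(cstate_names) / sizeof(*cstate_names); i++)
            if (strlen(cstate_names[i]) == len &&
                !strncmp(s, cstate_names[i], len))
                mask |= 1u << (i + 1);

        s += len;
        if (*s == ',')
            s++;                        /* skip the separator */
    }

    return mask;
}
```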
    
    Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    Origin: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git da0e58c038e6
    
    Enable C1E (if requested) not only on the BSP's socket / package. Alter
    command line option to fit our model, and extend it to also accept
    string form arguments.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 docs/misc/xen-command-line.pandoc |   6 ++
 xen/arch/x86/cpu/mwait-idle.c     | 132 ++++++++++++++++++++++++++++++++------
 2 files changed, 119 insertions(+), 19 deletions(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index 68389843b2..0fbdcb574f 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -1926,6 +1926,12 @@ paging controls access to usermode addresses.
 ### ple_window (Intel)
 > `= <integer>`
 
+### preferred-cstates (x86)
+> `= ( <integer> | List of ( C1 | C1E | C2 | ... ) )`
+
+This is a mask of C-states which are to be used preferably.  This option is
+applicable only on hardware where certain C-states are exclusive of one another.
+
 ### psr (Intel)
 > `= List of ( cmt:<boolean> | rmid_max:<integer> | cat:<boolean> | cos_max:<integer> | cdp:<boolean> )`
 
diff --git a/xen/arch/x86/cpu/mwait-idle.c b/xen/arch/x86/cpu/mwait-idle.c
index 5d77672f6b..cc62ddf743 100644
--- a/xen/arch/x86/cpu/mwait-idle.c
+++ b/xen/arch/x86/cpu/mwait-idle.c
@@ -82,10 +82,29 @@ boolean_param("mwait-idle", opt_mwait_idle);
 
 static unsigned int mwait_substates;
 
+/*
+ * Some platforms come with mutually exclusive C-states, so that if one is
+ * enabled, the other C-states must not be used. Example: C1 and C1E on
+ * Sapphire Rapids platform. This parameter allows for selecting the
+ * preferred C-states among the groups of mutually exclusive C-states - the
+ * selected C-states will be registered, the other C-states from the mutually
+ * exclusive group won't be registered. If the platform has no mutually
+ * exclusive C-states, this parameter has no effect.
+ */
+static unsigned int __ro_after_init preferred_states_mask;
+static char __initdata preferred_states[64];
+string_param("preferred-cstates", preferred_states);
+
 #define LAPIC_TIMER_ALWAYS_RELIABLE 0xFFFFFFFF
 /* Reliable LAPIC Timer States, bit 1 for C1 etc. Default to only C1. */
 static unsigned int lapic_timer_reliable_states = (1 << 1);
 
+enum c1e_promotion {
+	C1E_PROMOTION_PRESERVE,
+	C1E_PROMOTION_ENABLE,
+	C1E_PROMOTION_DISABLE
+};
+
 struct idle_cpu {
 	const struct cpuidle_state *state_table;
 
@@ -95,7 +114,7 @@ struct idle_cpu {
 	 */
 	unsigned long auto_demotion_disable_flags;
 	bool byt_auto_demotion_disable_flag;
-	bool disable_promotion_to_c1e;
+	enum c1e_promotion c1e_promotion;
 };
 
 static const struct idle_cpu *icpu;
@@ -924,6 +943,15 @@ static void cf_check byt_auto_demotion_disable(void *dummy)
 	wrmsrl(MSR_MC6_DEMOTION_POLICY_CONFIG, 0);
 }
 
+static void cf_check c1e_promotion_enable(void *dummy)
+{
+	uint64_t msr_bits;
+
+	rdmsrl(MSR_IA32_POWER_CTL, msr_bits);
+	msr_bits |= 0x2;
+	wrmsrl(MSR_IA32_POWER_CTL, msr_bits);
+}
+
 static void cf_check c1e_promotion_disable(void *dummy)
 {
 	u64 msr_bits;
@@ -936,7 +964,7 @@ static void cf_check c1e_promotion_disable(void *dummy)
 static const struct idle_cpu idle_cpu_nehalem = {
 	.state_table = nehalem_cstates,
 	.auto_demotion_disable_flags = NHM_C1_AUTO_DEMOTE | NHM_C3_AUTO_DEMOTE,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
 static const struct idle_cpu idle_cpu_atom = {
@@ -954,64 +982,64 @@ static const struct idle_cpu idle_cpu_lincroft = {
 
 static const struct idle_cpu idle_cpu_snb = {
 	.state_table = snb_cstates,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
 static const struct idle_cpu idle_cpu_byt = {
 	.state_table = byt_cstates,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 	.byt_auto_demotion_disable_flag = true,
 };
 
 static const struct idle_cpu idle_cpu_cht = {
 	.state_table = cht_cstates,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 	.byt_auto_demotion_disable_flag = true,
 };
 
 static const struct idle_cpu idle_cpu_ivb = {
 	.state_table = ivb_cstates,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
 static const struct idle_cpu idle_cpu_ivt = {
 	.state_table = ivt_cstates,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
 static const struct idle_cpu idle_cpu_hsw = {
 	.state_table = hsw_cstates,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
 static const struct idle_cpu idle_cpu_bdw = {
 	.state_table = bdw_cstates,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
 static const struct idle_cpu idle_cpu_skl = {
 	.state_table = skl_cstates,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
 static const struct idle_cpu idle_cpu_skx = {
 	.state_table = skx_cstates,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
 static const struct idle_cpu idle_cpu_icx = {
-       .state_table = icx_cstates,
-       .disable_promotion_to_c1e = true,
+	.state_table = icx_cstates,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
 static struct idle_cpu __read_mostly idle_cpu_spr = {
 	.state_table = spr_cstates,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
 static const struct idle_cpu idle_cpu_avn = {
 	.state_table = avn_cstates,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
 static const struct idle_cpu idle_cpu_knl = {
@@ -1020,17 +1048,17 @@ static const struct idle_cpu idle_cpu_knl = {
 
 static const struct idle_cpu idle_cpu_bxt = {
 	.state_table = bxt_cstates,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
 static const struct idle_cpu idle_cpu_dnv = {
 	.state_table = dnv_cstates,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
 static const struct idle_cpu idle_cpu_snr = {
 	.state_table = snr_cstates,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
 #define ICPU(model, cpu) \
@@ -1240,6 +1268,25 @@ static void __init skx_idle_state_table_update(void)
 	}
 }
 
+/*
+ * spr_idle_state_table_update - Adjust Sapphire Rapids idle states table.
+ */
+static void __init spr_idle_state_table_update(void)
+{
+	/* Check if user prefers C1E over C1. */
+	if (preferred_states_mask & BIT(2, U)) {
+		if (preferred_states_mask & BIT(1, U))
+			/* Both can't be enabled, stick to the defaults. */
+			return;
+
+		spr_cstates[0].flags |= CPUIDLE_FLAG_DISABLED;
+		spr_cstates[1].flags &= ~CPUIDLE_FLAG_DISABLED;
+
+		/* Request enabling C1E using the "C1E promotion" bit. */
+		idle_cpu_spr.c1e_promotion = C1E_PROMOTION_ENABLE;
+	}
+}
+
 /*
  * mwait_idle_state_table_update()
  *
@@ -1261,6 +1308,9 @@ static void __init mwait_idle_state_table_update(void)
 	case INTEL_FAM6_SKYLAKE_X:
 		skx_idle_state_table_update();
 		break;
+	case INTEL_FAM6_SAPPHIRERAPIDS_X:
+		spr_idle_state_table_update();
+		break;
 	}
 }
 
@@ -1268,6 +1318,7 @@ static int __init mwait_idle_probe(void)
 {
 	unsigned int eax, ebx, ecx;
 	const struct x86_cpu_id *id = x86_match_cpu(intel_idle_ids);
+	const char *str;
 
 	if (!id) {
 		pr_debug(PREFIX "does not run on family %d model %d\n",
@@ -1309,6 +1360,39 @@ static int __init mwait_idle_probe(void)
 	pr_debug(PREFIX "lapic_timer_reliable_states %#x\n",
 		 lapic_timer_reliable_states);
 
+	str = preferred_states;
+	if (isdigit(str[0]))
+		preferred_states_mask = simple_strtoul(str, &str, 0);
+	else if (str[0])
+	{
+		const char *ss;
+
+		do {
+			const struct cpuidle_state *state = icpu->state_table;
+			unsigned int bit = 1;
+
+			ss = strchr(str, ',');
+			if (!ss)
+				ss = strchr(str, '\0');
+
+			for (; state->name[0]; ++state) {
+				bit <<= 1;
+				if (!cmdline_strcmp(str, state->name)) {
+					preferred_states_mask |= bit;
+					break;
+				}
+			}
+			if (!state->name[0])
+				break;
+
+			str = ss + 1;
+		} while (*ss);
+
+		str -= str == ss + 1;
+	}
+	if (str[0])
+		printk("unrecognized \"preferred-cstates=%s\"\n", str);
+
 	mwait_idle_state_table_update();
 
 	return 0;
@@ -1400,8 +1484,18 @@ static int cf_check mwait_idle_cpu_init(
 	if (icpu->byt_auto_demotion_disable_flag)
 		on_selected_cpus(cpumask_of(cpu), byt_auto_demotion_disable, NULL, 1);
 
-	if (icpu->disable_promotion_to_c1e)
+	switch (icpu->c1e_promotion) {
+	case C1E_PROMOTION_DISABLE:
 		on_selected_cpus(cpumask_of(cpu), c1e_promotion_disable, NULL, 1);
+		break;
+
+	case C1E_PROMOTION_ENABLE:
+		on_selected_cpus(cpumask_of(cpu), c1e_promotion_enable, NULL, 1);
+		break;
+
+	case C1E_PROMOTION_PRESERVE:
+		break;
+	}
 
 	return NOTIFY_DONE;
 }
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Thu Oct 13 16:00:15 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Oct 2022 16:00:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.422308.668236 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oj0d5-0008Oe-DC; Thu, 13 Oct 2022 16:00:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 422308.668236; Thu, 13 Oct 2022 16:00:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oj0d5-0008OW-AV; Thu, 13 Oct 2022 16:00:15 +0000
Received: by outflank-mailman (input) for mailman id 422308;
 Thu, 13 Oct 2022 16:00:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oj0d4-0008OM-FP
 for xen-changelog@lists.xenproject.org; Thu, 13 Oct 2022 16:00:14 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oj0d4-00034F-Dj
 for xen-changelog@lists.xenproject.org; Thu, 13 Oct 2022 16:00:14 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oj0d4-0007kF-Cm
 for xen-changelog@lists.xenproject.org; Thu, 13 Oct 2022 16:00:14 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=lwVVcK7zWLnEQE87nMqVnC5jKaMctc3omXxImoYcTRY=; b=QLFL/LS/UUxJbDgPF6r7CtgtRa
	SjaScrQV13AfjBbsxoZ8+t/DybEqG2ZAl4JueQRsW86FfBT/MhbC299RbhmZ8TDeiQ4BIKAK1y/RK
	K4U7tWID+YVa7Sj6Miv+nXeVpqrL7oXsyxsJxsoTJuRQZfE0lz0bXBUtKpd/06OUfBIo=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] x86/mwait-idle: add core C6 optimization for SPR
Message-Id: <E1oj0d4-0007kF-Cm@xenbits.xenproject.org>
Date: Thu, 13 Oct 2022 16:00:14 +0000

commit 13ecd1c216433125836c0516219a0854640eeeed
Author:     Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
AuthorDate: Thu Oct 13 17:53:26 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Thu Oct 13 17:53:26 2022 +0200

    x86/mwait-idle: add core C6 optimization for SPR
    
    From: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
    
    Add a Sapphire Rapids Xeon C6 optimization, similar to what we have for Sky Lake
    Xeon: if package C6 is disabled, adjust C6 exit latency and target residency to
    match core C6 values, instead of using the default package C6 values.
    
    Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    Origin: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 3a9cf77b60dc
    
    Make sure a contradictory "preferred-cstates" setting cannot bypass
    the added logic.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/arch/x86/cpu/mwait-idle.c | 23 ++++++++++++++++++-----
 1 file changed, 18 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/cpu/mwait-idle.c b/xen/arch/x86/cpu/mwait-idle.c
index cc62ddf743..17d756881a 100644
--- a/xen/arch/x86/cpu/mwait-idle.c
+++ b/xen/arch/x86/cpu/mwait-idle.c
@@ -1273,18 +1273,31 @@ static void __init skx_idle_state_table_update(void)
  */
 static void __init spr_idle_state_table_update(void)
 {
-	/* Check if user prefers C1E over C1. */
-	if (preferred_states_mask & BIT(2, U)) {
-		if (preferred_states_mask & BIT(1, U))
-			/* Both can't be enabled, stick to the defaults. */
-			return;
+	uint64_t msr;
 
+	/* Check if user prefers C1E over C1. */
+	if ((preferred_states_mask & BIT(2, U)) &&
+	    !(preferred_states_mask & BIT(1, U))) {
+		/* Disable C1 and enable C1E. */
 		spr_cstates[0].flags |= CPUIDLE_FLAG_DISABLED;
 		spr_cstates[1].flags &= ~CPUIDLE_FLAG_DISABLED;
 
 		/* Request enabling C1E using the "C1E promotion" bit. */
 		idle_cpu_spr.c1e_promotion = C1E_PROMOTION_ENABLE;
 	}
+
+	/*
+	 * By default, the C6 state assumes the worst-case scenario of package
+	 * C6. However, if PC6 is disabled, we update the numbers to match
+	 * core C6.
+	 */
+	rdmsrl(MSR_PKG_CST_CONFIG_CONTROL, msr);
+
+	/* Limit value 2 and above allow for PC6. */
+	if ((msr & 0x7) < 2) {
+		spr_cstates[2].exit_latency = 190;
+		spr_cstates[2].target_residency = 600;
+	}
 }
 
 /*
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Thu Oct 13 16:00:25 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Oct 2022 16:00:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.422309.668240 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oj0dF-00007S-Es; Thu, 13 Oct 2022 16:00:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 422309.668240; Thu, 13 Oct 2022 16:00:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oj0dF-00007K-C0; Thu, 13 Oct 2022 16:00:25 +0000
Received: by outflank-mailman (input) for mailman id 422309;
 Thu, 13 Oct 2022 16:00:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oj0dE-00006v-Hk
 for xen-changelog@lists.xenproject.org; Thu, 13 Oct 2022 16:00:24 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oj0dE-00034W-Gx
 for xen-changelog@lists.xenproject.org; Thu, 13 Oct 2022 16:00:24 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oj0dE-0007lE-Fx
 for xen-changelog@lists.xenproject.org; Thu, 13 Oct 2022 16:00:24 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=d3YKEf8U/alKXbCCyGMi+32MRiNQhuAtwAayUgO8wdY=; b=MbaqsJIUtVCdLHUKvVXtadE/xc
	W3Afd9QtJmaH5NiAlUdXFt+hYrqEED5MofWQiHtZfrDq0A8n3F28aQSSn7W94J0BYmRqSa8GRlunv
	vZUoeWhC2NlVkChQWWOQ41P27NEUoqZDkc5T2STFFv+MzFHCSnUKTMlC92v7X9u2vCOs=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] x86/mwait-idle: add AlderLake support
Message-Id: <E1oj0dE-0007lE-Fx@xenbits.xenproject.org>
Date: Thu, 13 Oct 2022 16:00:24 +0000

commit 0fa9c3ef1e9196e8cd38c1532d29cf670dc21bcb
Author:     Zhang Rui <rui.zhang@intel.com>
AuthorDate: Thu Oct 13 17:54:23 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Thu Oct 13 17:54:23 2022 +0200

    x86/mwait-idle: add AlderLake support
    
    Similar to SPR, the C1 and C1E states on ADL are mutually exclusive.
    Only one of them can be enabled at a time.
    
    But in contrast to SPR, which as a Xeon processor usually has strong
    latency requirements, C1E is preferred on ADL for better energy
    efficiency.
    
    Add custom C-state tables for ADL with both C1 and C1E, and
    
     1. Enable the "C1E promotion" bit in MSR_IA32_POWER_CTL and mark C1
        with the CPUIDLE_FLAG_UNUSABLE flag, so C1 is not available by
        default.
    
     2. Add support for the "preferred_cstates" module parameter, so that
        users can choose to use C1 instead of C1E by booting with
        "intel_idle.preferred_cstates=2".
    
    Separate custom C-state tables are introduced for the ADL mobile and
    desktop processors, because of the exit latency differences between
    these two variants, especially with respect to PC10.
    
    Signed-off-by: Zhang Rui <rui.zhang@intel.com>
    [ rjw: Changelog edits, code rearrangement ]
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    Origin: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git d1cf8bbfed1e
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/arch/x86/cpu/mwait-idle.c | 116 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 116 insertions(+)

diff --git a/xen/arch/x86/cpu/mwait-idle.c b/xen/arch/x86/cpu/mwait-idle.c
index 17d756881a..86c47a04c7 100644
--- a/xen/arch/x86/cpu/mwait-idle.c
+++ b/xen/arch/x86/cpu/mwait-idle.c
@@ -605,6 +605,84 @@ static const struct cpuidle_state icx_cstates[] = {
        {}
 };
 
+/*
+ * On AlderLake C1 has to be disabled if C1E is enabled, and vice versa.
+ * C1E is enabled only if "C1E promotion" bit is set in MSR_IA32_POWER_CTL.
+ * But in this case there is effectively no C1, because C1 requests are
+ * promoted to C1E. If the "C1E promotion" bit is cleared, then both C1
+ * and C1E requests end up with C1, so there is effectively no C1E.
+ *
+ * By default we enable C1E and disable C1 by marking it with
+ * 'CPUIDLE_FLAG_DISABLED'.
+ */
+static struct cpuidle_state __read_mostly adl_cstates[] = {
+	{
+		.name = "C1",
+		.flags = MWAIT2flg(0x00) | CPUIDLE_FLAG_DISABLED,
+		.exit_latency = 1,
+		.target_residency = 1,
+	},
+	{
+		.name = "C1E",
+		.flags = MWAIT2flg(0x01),
+		.exit_latency = 2,
+		.target_residency = 4,
+	},
+	{
+		.name = "C6",
+		.flags = MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED,
+		.exit_latency = 220,
+		.target_residency = 600,
+	},
+	{
+		.name = "C8",
+		.flags = MWAIT2flg(0x40) | CPUIDLE_FLAG_TLB_FLUSHED,
+		.exit_latency = 280,
+		.target_residency = 800,
+	},
+	{
+		.name = "C10",
+		.flags = MWAIT2flg(0x60) | CPUIDLE_FLAG_TLB_FLUSHED,
+		.exit_latency = 680,
+		.target_residency = 2000,
+	},
+	{}
+};
+
+static struct cpuidle_state __read_mostly adl_l_cstates[] = {
+	{
+		.name = "C1",
+		.flags = MWAIT2flg(0x00) | CPUIDLE_FLAG_DISABLED,
+		.exit_latency = 1,
+		.target_residency = 1,
+	},
+	{
+		.name = "C1E",
+		.flags = MWAIT2flg(0x01),
+		.exit_latency = 2,
+		.target_residency = 4,
+	},
+	{
+		.name = "C6",
+		.flags = MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED,
+		.exit_latency = 170,
+		.target_residency = 500,
+	},
+	{
+		.name = "C8",
+		.flags = MWAIT2flg(0x40) | CPUIDLE_FLAG_TLB_FLUSHED,
+		.exit_latency = 200,
+		.target_residency = 600,
+	},
+	{
+		.name = "C10",
+		.flags = MWAIT2flg(0x60) | CPUIDLE_FLAG_TLB_FLUSHED,
+		.exit_latency = 230,
+		.target_residency = 700,
+	},
+	{}
+};
+
 /*
  * On Sapphire Rapids Xeon C1 has to be disabled if C1E is enabled, and vice
  * versa. On SPR C1E is enabled only if "C1E promotion" bit is set in
@@ -1032,6 +1110,14 @@ static const struct idle_cpu idle_cpu_icx = {
 	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
+static struct idle_cpu __read_mostly idle_cpu_adl = {
+	.state_table = adl_cstates,
+};
+
+static struct idle_cpu __read_mostly idle_cpu_adl_l = {
+	.state_table = adl_l_cstates,
+};
+
 static struct idle_cpu __read_mostly idle_cpu_spr = {
 	.state_table = spr_cstates,
 	.c1e_promotion = C1E_PROMOTION_DISABLE,
@@ -1099,6 +1185,8 @@ static const struct x86_cpu_id intel_idle_ids[] __initconstrel = {
 	ICPU(SKYLAKE_X,			skx),
 	ICPU(ICELAKE_X,			icx),
 	ICPU(ICELAKE_D,			icx),
+	ICPU(ALDERLAKE,			adl),
+	ICPU(ALDERLAKE_L,		adl_l),
 	ICPU(SAPPHIRERAPIDS_X,		spr),
 	ICPU(XEON_PHI_KNL,		knl),
 	ICPU(XEON_PHI_KNM,		knl),
@@ -1268,6 +1356,30 @@ static void __init skx_idle_state_table_update(void)
 	}
 }
 
+/*
+ * adl_idle_state_table_update - Adjust AlderLake idle states table.
+ */
+static void __init adl_idle_state_table_update(void)
+{
+	/* Check if user prefers C1 over C1E. */
+	if ((preferred_states_mask & BIT(1, U)) &&
+	    !(preferred_states_mask & BIT(2, U))) {
+		adl_cstates[0].flags &= ~CPUIDLE_FLAG_DISABLED;
+		adl_cstates[1].flags |= CPUIDLE_FLAG_DISABLED;
+		adl_l_cstates[0].flags &= ~CPUIDLE_FLAG_DISABLED;
+		adl_l_cstates[1].flags |= CPUIDLE_FLAG_DISABLED;
+
+		/* Disable C1E by clearing the "C1E promotion" bit. */
+		idle_cpu_adl.c1e_promotion = C1E_PROMOTION_DISABLE;
+		idle_cpu_adl_l.c1e_promotion = C1E_PROMOTION_DISABLE;
+		return;
+	}
+
+	/* Make sure C1E is enabled by default */
+	idle_cpu_adl.c1e_promotion = C1E_PROMOTION_ENABLE;
+	idle_cpu_adl_l.c1e_promotion = C1E_PROMOTION_ENABLE;
+}
+
 /*
  * spr_idle_state_table_update - Adjust Sapphire Rapids idle states table.
  */
@@ -1324,6 +1436,10 @@ static void __init mwait_idle_state_table_update(void)
 	case INTEL_FAM6_SAPPHIRERAPIDS_X:
 		spr_idle_state_table_update();
 		break;
+	case INTEL_FAM6_ALDERLAKE:
+	case INTEL_FAM6_ALDERLAKE_L:
+		adl_idle_state_table_update();
+		break;
 	}
 }
 
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Thu Oct 13 16:00:35 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Oct 2022 16:00:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.422310.668244 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oj0dP-0000An-GP; Thu, 13 Oct 2022 16:00:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 422310.668244; Thu, 13 Oct 2022 16:00:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oj0dP-0000Aa-DX; Thu, 13 Oct 2022 16:00:35 +0000
Received: by outflank-mailman (input) for mailman id 422310;
 Thu, 13 Oct 2022 16:00:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oj0dO-0000AQ-Ks
 for xen-changelog@lists.xenproject.org; Thu, 13 Oct 2022 16:00:34 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oj0dO-00034g-K6
 for xen-changelog@lists.xenproject.org; Thu, 13 Oct 2022 16:00:34 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oj0dO-0007mP-JD
 for xen-changelog@lists.xenproject.org; Thu, 13 Oct 2022 16:00:34 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=7It64DYVOSkK1/rop4jgGqaNs/W3vsFqlcK3cqkin7w=; b=WPIhMKSCwLxi+redCBP+t/BM8f
	+/TREloTEWsc5vWKAFHgOruxo57yM0cUmYpUJLrP544jEAVuxC2TCOAfvnY87txcUJgIS7PQEF9Y9
	0ZrbTKmV+egKBJ5co5K7jf+BhzfZC5ytAqoSmIjytFso+FyU3i6VcYgvC6BQt+2dzot4=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] x86/mwait-idle: disable IBRS during long idle
Message-Id: <E1oj0dO-0007mP-JD@xenbits.xenproject.org>
Date: Thu, 13 Oct 2022 16:00:34 +0000

commit 08acdf9a26153130d7fa47925ceb53c39fcb87da
Author:     Peter Zijlstra <peterz@infradead.org>
AuthorDate: Thu Oct 13 17:55:22 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Thu Oct 13 17:55:22 2022 +0200

    x86/mwait-idle: disable IBRS during long idle
    
    Having IBRS enabled while the SMT sibling is idle unnecessarily slows
    down the running sibling. OTOH, disabling IBRS around idle takes two
    MSR writes, which will increase the idle latency.
    
    Therefore, only disable IBRS around deeper idle states. Shallow idle
    states are bounded by the tick in duration, since NOHZ is not allowed
    for them by virtue of their short target residency.
    
    Only do this for mwait-driven idle, since that keeps interrupts disabled
    across idle, which makes disabling IBRS vs IRQ-entry a non-issue.
    
    Note: C6 is a somewhat arbitrary threshold; most importantly, C1
    probably shouldn't disable IBRS, and benchmarking is needed.
    
    Suggested-by: Tim Chen <tim.c.chen@linux.intel.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Signed-off-by: Borislav Petkov <bp@suse.de>
    Reviewed-by: Josh Poimboeuf <jpoimboe@kernel.org>
    Signed-off-by: Borislav Petkov <bp@suse.de>
    Origin: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git bf5835bcdb96
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/arch/x86/cpu/mwait-idle.c | 32 ++++++++++++++++++++++++--------
 xen/include/xen/cpuidle.h     |  3 ++-
 2 files changed, 26 insertions(+), 9 deletions(-)

diff --git a/xen/arch/x86/cpu/mwait-idle.c b/xen/arch/x86/cpu/mwait-idle.c
index 86c47a04c7..f5c83121a8 100644
--- a/xen/arch/x86/cpu/mwait-idle.c
+++ b/xen/arch/x86/cpu/mwait-idle.c
@@ -140,6 +140,12 @@ static const struct cpuidle_state {
  */
 #define CPUIDLE_FLAG_TLB_FLUSHED	0x10000
 
+/*
+ * Disable IBRS across idle (when KERNEL_IBRS), is exclusive vs IRQ_ENABLE
+ * above.
+ */
+#define CPUIDLE_FLAG_IBRS		0x20000
+
 /*
  * MWAIT takes an 8-bit "hint" in EAX "suggesting"
  * the C-state (top nibble) and sub-state (bottom nibble)
@@ -530,31 +536,31 @@ static struct cpuidle_state __read_mostly skl_cstates[] = {
 	},
 	{
 		.name = "C6",
-		.flags = MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED,
+		.flags = MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED | CPUIDLE_FLAG_IBRS,
 		.exit_latency = 85,
 		.target_residency = 200,
 	},
 	{
 		.name = "C7s",
-		.flags = MWAIT2flg(0x33) | CPUIDLE_FLAG_TLB_FLUSHED,
+		.flags = MWAIT2flg(0x33) | CPUIDLE_FLAG_TLB_FLUSHED | CPUIDLE_FLAG_IBRS,
 		.exit_latency = 124,
 		.target_residency = 800,
 	},
 	{
 		.name = "C8",
-		.flags = MWAIT2flg(0x40) | CPUIDLE_FLAG_TLB_FLUSHED,
+		.flags = MWAIT2flg(0x40) | CPUIDLE_FLAG_TLB_FLUSHED | CPUIDLE_FLAG_IBRS,
 		.exit_latency = 200,
 		.target_residency = 800,
 	},
 	{
 		.name = "C9",
-		.flags = MWAIT2flg(0x50) | CPUIDLE_FLAG_TLB_FLUSHED,
+		.flags = MWAIT2flg(0x50) | CPUIDLE_FLAG_TLB_FLUSHED | CPUIDLE_FLAG_IBRS,
 		.exit_latency = 480,
 		.target_residency = 5000,
 	},
 	{
 		.name = "C10",
-		.flags = MWAIT2flg(0x60) | CPUIDLE_FLAG_TLB_FLUSHED,
+		.flags = MWAIT2flg(0x60) | CPUIDLE_FLAG_TLB_FLUSHED | CPUIDLE_FLAG_IBRS,
 		.exit_latency = 890,
 		.target_residency = 5000,
 	},
@@ -576,7 +582,7 @@ static struct cpuidle_state __read_mostly skx_cstates[] = {
 	},
 	{
 		.name = "C6",
-		.flags = MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED,
+		.flags = MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED | CPUIDLE_FLAG_IBRS,
 		.exit_latency = 133,
 		.target_residency = 600,
 	},
@@ -906,6 +912,7 @@ static const struct cpuidle_state snr_cstates[] = {
 static void cf_check mwait_idle(void)
 {
 	unsigned int cpu = smp_processor_id();
+	struct cpu_info *info = get_cpu_info();
 	struct acpi_processor_power *power = processor_powers[cpu];
 	struct acpi_processor_cx *cx = NULL;
 	unsigned int next_state;
@@ -932,8 +939,6 @@ static void cf_check mwait_idle(void)
 			pm_idle_save();
 		else
 		{
-			struct cpu_info *info = get_cpu_info();
-
 			spec_ctrl_enter_idle(info);
 			safe_halt();
 			spec_ctrl_exit_idle(info);
@@ -960,6 +965,11 @@ static void cf_check mwait_idle(void)
 	if ((cx->type >= 3) && errata_c6_workaround())
 		cx = power->safe_state;
 
+	if (cx->ibrs_disable) {
+		ASSERT(!cx->irq_enable_early);
+		spec_ctrl_enter_idle(info);
+	}
+
 #if 0 /* XXX Can we/do we need to do something similar on Xen? */
 	/*
 	 * leave_mm() to avoid costly and often unnecessary wakeups
@@ -991,6 +1001,10 @@ static void cf_check mwait_idle(void)
 
 	/* Now back in C0. */
 	update_idle_stats(power, cx, before, after);
+
+	if (cx->ibrs_disable)
+		spec_ctrl_exit_idle(info);
+
 	local_irq_enable();
 
 	TRACE_6D(TRC_PM_IDLE_EXIT, cx->type, after,
@@ -1603,6 +1617,8 @@ static int cf_check mwait_idle_cpu_init(
 		    /* cstate_restore_tsc() needs to be a no-op */
 		    boot_cpu_has(X86_FEATURE_NONSTOP_TSC))
 			cx->irq_enable_early = true;
+		if (cpuidle_state_table[cstate].flags & CPUIDLE_FLAG_IBRS)
+			cx->ibrs_disable = true;
 
 		dev->count++;
 	}
diff --git a/xen/include/xen/cpuidle.h b/xen/include/xen/cpuidle.h
index bd24a31e12..521a8deb04 100644
--- a/xen/include/xen/cpuidle.h
+++ b/xen/include/xen/cpuidle.h
@@ -42,7 +42,8 @@ struct acpi_processor_cx
     u8 idx;
     u8 type;         /* ACPI_STATE_Cn */
     u8 entry_method; /* ACPI_CSTATE_EM_xxx */
-    bool irq_enable_early;
+    bool irq_enable_early:1;
+    bool ibrs_disable:1;
     u32 address;
     u32 latency;
     u32 target_residency;
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Thu Oct 13 16:00:45 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Oct 2022 16:00:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.422311.668248 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oj0dZ-0000EK-Iw; Thu, 13 Oct 2022 16:00:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 422311.668248; Thu, 13 Oct 2022 16:00:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oj0dZ-0000ED-GS; Thu, 13 Oct 2022 16:00:45 +0000
Received: by outflank-mailman (input) for mailman id 422311;
 Thu, 13 Oct 2022 16:00:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oj0dY-0000E5-Nx
 for xen-changelog@lists.xenproject.org; Thu, 13 Oct 2022 16:00:44 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oj0dY-00035C-NF
 for xen-changelog@lists.xenproject.org; Thu, 13 Oct 2022 16:00:44 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oj0dY-0007nX-MO
 for xen-changelog@lists.xenproject.org; Thu, 13 Oct 2022 16:00:44 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=xDHUBw6t1qDDPpZ7CFIBrue0hsImTVtqZ/gPrIMPdL8=; b=KXvP4y/CucZ3ORT19jP4f1uvRL
	7I1nydnmqs+waD6FUXettDcXqLBqE6woEl92AEfxWArqsNiI/iQkI4nSYn69RB5frDJNQzjA2CUIi
	qM44LLSknIkKjzJWoZuE9Hn4bkSWbJMpeyHpdfsQwhb9/se2ZJmJxwMK3D9kerslQ5Ns=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] x86/mwait-idle: make SPR C1 and C1E be independent
Message-Id: <E1oj0dY-0007nX-MO@xenbits.xenproject.org>
Date: Thu, 13 Oct 2022 16:00:44 +0000

commit 171d4d24f829075cac83b6fafe7a4ed7c93935a6
Author:     Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
AuthorDate: Thu Oct 13 17:56:13 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Thu Oct 13 17:56:13 2022 +0200

    x86/mwait-idle: make SPR C1 and C1E be independent
    
    This patch partially reverts the changes made by the following commit:
    
    da0e58c038e6 intel_idle: add 'preferred_cstates' module argument
    
    As that commit describes, on early Sapphire Rapids Xeon platforms the C1 and
    C1E states were mutually exclusive, so that users could only have either C1 and
    C6, or C1E and C6.
    
    However, Intel firmware engineers managed to remove this limitation
    and make C1 and C1E completely independent, just like on previous
    Xeon platforms.
    
    Therefore, this patch:
     * Removes commentary describing the old, now non-existent SPR C1E
       limitation.
     * Marks SPR C1E as available by default.
     * Removes the 'preferred_cstates' parameter handling for SPR. Both C1 and
       C1E will be available regardless of 'preferred_cstates' value.
    
    We expect that all SPR systems are shipping with new firmware, which includes
    the C1/C1E improvement.
    
    Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    Origin: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 1548fac47a11
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/arch/x86/cpu/mwait-idle.c | 23 +----------------------
 1 file changed, 1 insertion(+), 22 deletions(-)

diff --git a/xen/arch/x86/cpu/mwait-idle.c b/xen/arch/x86/cpu/mwait-idle.c
index f5c83121a8..ffdc6fb2fc 100644
--- a/xen/arch/x86/cpu/mwait-idle.c
+++ b/xen/arch/x86/cpu/mwait-idle.c
@@ -689,16 +689,6 @@ static struct cpuidle_state __read_mostly adl_l_cstates[] = {
 	{}
 };
 
-/*
- * On Sapphire Rapids Xeon C1 has to be disabled if C1E is enabled, and vice
- * versa. On SPR C1E is enabled only if "C1E promotion" bit is set in
- * MSR_IA32_POWER_CTL. But in this case there effectively no C1, because C1
- * requests are promoted to C1E. If the "C1E promotion" bit is cleared, then
- * both C1 and C1E requests end up with C1, so there is effectively no C1E.
- *
- * By default we enable C1 and disable C1E by marking it with
- * 'CPUIDLE_FLAG_DISABLED'.
- */
 static struct cpuidle_state __read_mostly spr_cstates[] = {
 	{
 		.name = "C1",
@@ -708,7 +698,7 @@ static struct cpuidle_state __read_mostly spr_cstates[] = {
 	},
 	{
 		.name = "C1E",
-		.flags = MWAIT2flg(0x01) | CPUIDLE_FLAG_DISABLED,
+		.flags = MWAIT2flg(0x01),
 		.exit_latency = 2,
 		.target_residency = 4,
 	},
@@ -1401,17 +1391,6 @@ static void __init spr_idle_state_table_update(void)
 {
 	uint64_t msr;
 
-	/* Check if user prefers C1E over C1. */
-	if ((preferred_states_mask & BIT(2, U)) &&
-	    !(preferred_states_mask & BIT(1, U))) {
-		/* Disable C1 and enable C1E. */
-		spr_cstates[0].flags |= CPUIDLE_FLAG_DISABLED;
-		spr_cstates[1].flags &= ~CPUIDLE_FLAG_DISABLED;
-
-		/* Request enabling C1E using the "C1E promotion" bit. */
-		idle_cpu_spr.c1e_promotion = C1E_PROMOTION_ENABLE;
-	}
-
 	/*
 	 * By default, the C6 state assumes the worst-case scenario of package
 	 * C6. However, if PC6 is disabled, we update the numbers to match
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Fri Oct 14 13:55:14 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Oct 2022 13:55:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.422889.669230 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ojL9W-00077m-KM; Fri, 14 Oct 2022 13:55:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 422889.669230; Fri, 14 Oct 2022 13:55:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ojL9W-00077e-HS; Fri, 14 Oct 2022 13:55:06 +0000
Received: by outflank-mailman (input) for mailman id 422889;
 Fri, 14 Oct 2022 13:55:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojL9U-00077G-Rb
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 13:55:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojL9U-0001oy-NX
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 13:55:04 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojL9U-0006p8-ML
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 13:55:04 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=Kexs7e1gW1fluMuFLqNdpjjupMI0Hucc9s5ldpqIJyE=; b=YcTq7Bm1z1Nyt67cLJ3jrpD58b
	UK8DHXeXwYYv3pJME8li78Egu4niWcpADwPhkkOVFULQtuepnwbUuliP92ErYGUupTnH0meq0xjrw
	PrH6vN917wA5AzJsw8BSGUH4PBCqB1Ldk8bLD/KdmL3OPxuL4JXwYaA9C1amwbummGlk=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] argo: Remove reachable ASSERT_UNREACHABLE
Message-Id: <E1ojL9U-0006p8-ML@xenbits.xenproject.org>
Date: Fri, 14 Oct 2022 13:55:04 +0000

commit 197f612b77c5afe04e60df2100a855370d720ad7
Author:     Jason Andryuk <jandryuk@gmail.com>
AuthorDate: Fri Oct 7 15:31:24 2022 -0400
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Fri Oct 14 14:45:41 2022 +0100

    argo: Remove reachable ASSERT_UNREACHABLE
    
    I observed this ASSERT_UNREACHABLE in partner_rings_remove trip
    consistently, in OpenXT with the viptables patch applied.
    
    dom10 shuts down.
    dom7 is REJECTED sending to dom10.
    dom7 shuts down and this ASSERT trips for dom10.
    
    The argo_send_info has a domid, but there is no refcount taken on
    the domain.  Therefore it's not appropriate to ASSERT that the domain
    can be looked up via domid.  Replace with a debug message.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Reviewed-by: Christopher Clark <christopher.w.clark@gmail.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/common/argo.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/common/argo.c b/xen/common/argo.c
index 748b8714d6..9ad2ecaa1e 100644
--- a/xen/common/argo.c
+++ b/xen/common/argo.c
@@ -1298,7 +1298,8 @@ partner_rings_remove(struct domain *src_d)
                     ASSERT_UNREACHABLE();
             }
             else
-                ASSERT_UNREACHABLE();
+                argo_dprintk("%pd has entry for stale partner d%u\n",
+                             src_d, send_info->id.domain_id);
 
             if ( dst_d )
                 rcu_unlock_domain(dst_d);
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Fri Oct 14 15:22:08 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Oct 2022 15:22:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.422946.669341 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ojMVi-0003Xy-33; Fri, 14 Oct 2022 15:22:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 422946.669341; Fri, 14 Oct 2022 15:22:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ojMVh-0003Xr-Vc; Fri, 14 Oct 2022 15:22:05 +0000
Received: by outflank-mailman (input) for mailman id 422946;
 Fri, 14 Oct 2022 15:22:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojMVg-0003Xb-Fm
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 15:22:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojMVg-0003Qz-Ev
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 15:22:04 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojMVg-0002ml-E6
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 15:22:04 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=wA4Yt7KeRnQiKNQ+zpzrXytaormriLZrnI7zq07MegA=; b=6r6MljJCCMouhG1Zl+5bfD6BRZ
	ZeDtimatOYvlvCdctj/PRjniI07d1j3sqNe00vpY2sNVVPqMzIyu4AHtI3mPWFVFGzypB7U/E98G8
	6IIRvyoWfLbvRK24QFxUU8rS6mbzFphcKtbeH30oFeu/H+E5SlbHEIdObm5gS9c2lqFM=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] tools/debugger/gdbsx: Fix and cleanup makefiles
Message-Id: <E1ojMVg-0002ml-E6@xenbits.xenproject.org>
Date: Fri, 14 Oct 2022 15:22:04 +0000

commit 3a206abcd7f77bbbf0da24547e1d889c4d2789c7
Author:     Anthony PERARD <anthony.perard@citrix.com>
AuthorDate: Thu Oct 13 14:04:57 2022 +0100
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Fri Oct 14 16:16:54 2022 +0100

    tools/debugger/gdbsx: Fix and cleanup makefiles
    
    gdbsx/:
      - Make use of subdir facility for the "clean" target.
      - No need to remove the *.a, they aren't in this dir.
      - Avoid calling "distclean" in subdirs, as "distclean" targets only
        call "clean", and "clean" already runs "clean" in subdirs.
      - Avoid the need to make "gx_all.a" and "xg_all.a" in the "all"
        recipe by forcing make to check for update of "xg/xg_all.a" and
        "gx/gx_all.a" by having "FORCE" as prerequisite. Now, when making
        "gdbsx", make will recurse even when both *.a already exist.
      - List target in $(TARGETS).
    
    gdbsx/*/:
      - Fix dependency on *.h.
      - Remove some dead code.
      - List targets in $(TARGETS).
      - Remove "build" target.
      - Cleanup "clean" targets.
      - Remove comments about the choice of "ar" instead of "ld".
      - Use "$(AR)" instead of plain "ar".
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/debugger/gdbsx/Makefile    | 20 ++++++++++----------
 tools/debugger/gdbsx/gx/Makefile | 15 +++++++--------
 tools/debugger/gdbsx/xg/Makefile | 25 +++++++------------------
 3 files changed, 24 insertions(+), 36 deletions(-)

diff --git a/tools/debugger/gdbsx/Makefile b/tools/debugger/gdbsx/Makefile
index 5571450a89..4aaf427c45 100644
--- a/tools/debugger/gdbsx/Makefile
+++ b/tools/debugger/gdbsx/Makefile
@@ -1,20 +1,20 @@
 XEN_ROOT = $(CURDIR)/../../..
 include ./Rules.mk
 
+SUBDIRS-y += gx
+SUBDIRS-y += xg
+
+TARGETS := gdbsx
+
 .PHONY: all
-all:
-	$(MAKE) -C gx
-	$(MAKE) -C xg
-	$(MAKE) gdbsx
+all: $(TARGETS)
 
 .PHONY: clean
-clean:
-	rm -f xg_all.a gx_all.a gdbsx
-	set -e; for d in xg gx; do $(MAKE) -C $$d clean; done
+clean: subdirs-clean
+	rm -f $(TARGETS)
 
 .PHONY: distclean
 distclean: clean
-	set -e; for d in xg gx; do $(MAKE) -C $$d distclean; done
 
 .PHONY: install
 install: all
@@ -28,7 +28,7 @@ uninstall:
 gdbsx: gx/gx_all.a xg/xg_all.a 
 	$(CC) $(LDFLAGS) -o $@ $^
 
-xg/xg_all.a:
+xg/xg_all.a: FORCE
 	$(MAKE) -C xg
-gx/gx_all.a:
+gx/gx_all.a: FORCE
 	$(MAKE) -C gx
diff --git a/tools/debugger/gdbsx/gx/Makefile b/tools/debugger/gdbsx/gx/Makefile
index 3b8467f799..e9859aea9c 100644
--- a/tools/debugger/gdbsx/gx/Makefile
+++ b/tools/debugger/gdbsx/gx/Makefile
@@ -2,21 +2,20 @@ XEN_ROOT = $(CURDIR)/../../../..
 include ../Rules.mk
 
 GX_OBJS := gx_comm.o gx_main.o gx_utils.o gx_local.o
-GX_HDRS := $(wildcard *.h)
+
+TARGETS := gx_all.a
 
 .PHONY: all
-all: gx_all.a
+all: $(TARGETS)
 
 .PHONY: clean
 clean:
-	rm -rf gx_all.a *.o .*.d
+	rm -f *.o $(TARGETS) $(DEPS_RM)
 
 .PHONY: distclean
 distclean: clean
 
-#%.o: %.c $(GX_HDRS) Makefile
-#	$(CC) -c $(CFLAGS) -o $@ $<
-
-gx_all.a: $(GX_OBJS) Makefile $(GX_HDRS)
-	ar cr $@ $(GX_OBJS)        # problem with ld using -m32 
+gx_all.a: $(GX_OBJS) Makefile
+	$(AR) cr $@ $(GX_OBJS)
 
+-include $(DEPS_INCLUDE)
diff --git a/tools/debugger/gdbsx/xg/Makefile b/tools/debugger/gdbsx/xg/Makefile
index acdcddf0d5..05325d6d81 100644
--- a/tools/debugger/gdbsx/xg/Makefile
+++ b/tools/debugger/gdbsx/xg/Makefile
@@ -1,35 +1,24 @@
 XEN_ROOT = $(CURDIR)/../../../..
 include ../Rules.mk
 
-XG_HDRS := xg_public.h 
 XG_OBJS := xg_main.o 
 
 CFLAGS += -D__XEN_TOOLS__
 CFLAGS += $(CFLAGS_xeninclude)
 
+TARGETS := xg_all.a
 
 .PHONY: all
-all: build
+all: $(TARGETS)
 
-.PHONY: build
-build: xg_all.a $(XG_HDRS) $(XG_OBJS) Makefile
-# build: mk-symlinks xg_all.a $(XG_HDRS) $(XG_OBJS) Makefile
-# build: mk-symlinks xg_all.a
-
-xg_all.a: $(XG_OBJS) Makefile $(XG_HDRS)
-	ar cr $@ $(XG_OBJS)    # problems using -m32 in ld 
-#	$(LD) -b elf32-i386 $(LDFLAGS) -r -o $@ $^
-#	$(CC) -m32 -c -o $@ $^
-
-# xg_main.o: xg_main.c Makefile $(XG_HDRS)
-#$(CC) -c $(CFLAGS) -o $@ $<
-
-# %.o: %.c $(XG_HDRS) Makefile  -- doesn't work as it won't overwrite Rules.mk
-#%.o: %.c       -- doesn't recompile when .c changed
+xg_all.a: $(XG_OBJS) Makefile
+	$(AR) cr $@ $(XG_OBJS)
 
 .PHONY: clean
 clean:
-	rm -rf xen xg_all.a $(XG_OBJS)  .*.d
+	rm -f $(TARGETS) $(XG_OBJS) $(DEPS_RM)
 
 .PHONY: distclean
 distclean: clean
+
+-include $(DEPS_INCLUDE)
--
generated by git-patchbot for /home/xen/git/xen.git#staging
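
The "FORCE" prerequisite technique described in the commit message above can be sketched as a minimal standalone Makefile (target names here are illustrative, not the actual gdbsx rules). Without FORCE, once `sub/lib.a` exists make considers it up to date and never recurses; with FORCE as a prerequisite the recipe always runs, and the sub-make itself decides whether anything needs rebuilding:

```make
# Minimal sketch, assuming a conventional empty FORCE target.
prog: sub/lib.a
	$(CC) -o $@ $^

# FORCE makes this recipe run on every invocation, so the recursive
# make always gets a chance to detect changes under sub/.
sub/lib.a: FORCE
	$(MAKE) -C sub

FORCE:
```

This trades a cheap extra recursion for correctness: the top-level make no longer needs to know the subdirectory's dependencies.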


From xen-changelog-bounces@lists.xenproject.org Fri Oct 14 15:22:16 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Oct 2022 15:22:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.422950.669344 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ojMVs-0003fm-49; Fri, 14 Oct 2022 15:22:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 422950.669344; Fri, 14 Oct 2022 15:22:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ojMVs-0003ff-15; Fri, 14 Oct 2022 15:22:16 +0000
Received: by outflank-mailman (input) for mailman id 422950;
 Fri, 14 Oct 2022 15:22:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojMVq-0003fL-IU
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 15:22:14 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojMVq-0003RA-Hg
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 15:22:14 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojMVq-0002nC-Gt
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 15:22:14 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=nqSOAqIeARd2TLi+19WkOTrCyfO/ecnuPSYHKOZ9SCo=; b=2Y6FA06rJG85k5vNKd+HgBRK1L
	qETw56AzjS61uYzzIxfqAGlc15srsI3b6XymodefVUxoZh4iImYUPVcwyZetljiwkzgnsvHhGA5iw
	Ri7lZtloVOi8+swEzGnLse8PiYMmo36rIpn+B9oU8FhAMtP4XHhyI5Y+GipWJN41onZg=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] tools/xentrace: rework Makefile
Message-Id: <E1ojMVq-0002nC-Gt@xenbits.xenproject.org>
Date: Fri, 14 Oct 2022 15:22:14 +0000

commit a2e8156ba49d699db3d2e36df21c8f57c832de77
Author:     Anthony PERARD <anthony.perard@citrix.com>
AuthorDate: Thu Oct 13 14:04:58 2022 +0100
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Fri Oct 14 16:16:54 2022 +0100

    tools/xentrace: rework Makefile
    
    Remove "build" targets.
    
    Use "$(TARGETS)" to list the binaries to be built.
    
    Cleanup "clean" rule.
    
    Also drop conditional install of $(BIN) and $(LIBBIN) as those two
    variables are now always populated.
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/xentrace/Makefile | 27 +++++++++++----------------
 1 file changed, 11 insertions(+), 16 deletions(-)

diff --git a/tools/xentrace/Makefile b/tools/xentrace/Makefile
index 9fb7fc96e7..63f2f6532d 100644
--- a/tools/xentrace/Makefile
+++ b/tools/xentrace/Makefile
@@ -9,41 +9,36 @@ LDLIBS += $(LDLIBS_libxenevtchn)
 LDLIBS += $(LDLIBS_libxenctrl)
 LDLIBS += $(ARGP_LDFLAGS)
 
-BIN      = xenalyze
-SBIN     = xentrace xentrace_setsize
-LIBBIN   = xenctx
-SCRIPTS  = xentrace_format
+BIN     := xenalyze
+SBIN    := xentrace xentrace_setsize
+LIBBIN  := xenctx
+SCRIPTS := xentrace_format
 
-.PHONY: all
-all: build
+TARGETS := $(BIN) $(SBIN) $(LIBBIN)
 
-.PHONY: build
-build: $(BIN) $(SBIN) $(LIBBIN)
+.PHONY: all
+all: $(TARGETS)
 
 .PHONY: install
-install: build
+install: all
 	$(INSTALL_DIR) $(DESTDIR)$(bindir)
 	$(INSTALL_DIR) $(DESTDIR)$(sbindir)
-	[ -z "$(LIBBIN)" ] || $(INSTALL_DIR) $(DESTDIR)$(LIBEXEC_BIN)
-ifneq ($(BIN),)
+	$(INSTALL_DIR) $(DESTDIR)$(LIBEXEC_BIN)
 	$(INSTALL_PROG) $(BIN) $(DESTDIR)$(bindir)
-endif
 	$(INSTALL_PROG) $(SBIN) $(DESTDIR)$(sbindir)
 	$(INSTALL_PYTHON_PROG) $(SCRIPTS) $(DESTDIR)$(bindir)
-	[ -z "$(LIBBIN)" ] || $(INSTALL_PROG) $(LIBBIN) $(DESTDIR)$(LIBEXEC_BIN)
+	$(INSTALL_PROG) $(LIBBIN) $(DESTDIR)$(LIBEXEC_BIN)
 
 .PHONY: uninstall
 uninstall:
 	rm -f $(addprefix $(DESTDIR)$(LIBEXEC_BIN)/, $(LIBBIN))
 	rm -f $(addprefix $(DESTDIR)$(bindir)/, $(SCRIPTS))
 	rm -f $(addprefix $(DESTDIR)$(sbindir)/, $(SBIN))
-ifneq ($(BIN),)
 	rm -f $(addprefix $(DESTDIR)$(bindir)/, $(BIN))
-endif
 
 .PHONY: clean
 clean:
-	$(RM) *.a *.so *.o *.rpm $(BIN) $(SBIN) $(LIBBIN) $(DEPS_RM)
+	$(RM) *.o $(TARGETS) $(DEPS_RM)
 
 .PHONY: distclean
 distclean: clean
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Fri Oct 14 15:22:26 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Oct 2022 15:22:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.422952.669348 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ojMW2-0003lU-5P; Fri, 14 Oct 2022 15:22:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 422952.669348; Fri, 14 Oct 2022 15:22:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ojMW2-0003lL-2b; Fri, 14 Oct 2022 15:22:26 +0000
Received: by outflank-mailman (input) for mailman id 422952;
 Fri, 14 Oct 2022 15:22:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojMW0-0003l1-Lh
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 15:22:24 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojMW0-0003RN-Kx
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 15:22:24 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojMW0-0002ns-K4
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 15:22:24 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=3nI0cC9t69izBxVZl+unyd+CyQobN1UqiZtxopAMSzA=; b=jBsvZqvEtSpgO6WsVcokxUGUHd
	97jDTyURzdIlu7srMWr1TGaPt7nkNPzmqcXFBo9JiC/ehT9z9wgkb2pVPlhYyrZ3/hdLMHJcpgIbW
	h3UA1LAsrnOFlUxvDF3gmbQ8pc8TSDluv+NmaI9BWDfVRJts8K9UIQ3i2CkHM7uFTS84=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] tools: Introduce $(xenlibs-ldflags, ) macro
Message-Id: <E1ojMW0-0002ns-K4@xenbits.xenproject.org>
Date: Fri, 14 Oct 2022 15:22:24 +0000

commit fcdb9cdb953d6c1f893286c3619e74f72e1327fc
Author:     Anthony PERARD <anthony.perard@citrix.com>
AuthorDate: Thu Oct 13 14:04:59 2022 +0100
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Fri Oct 14 16:16:54 2022 +0100

    tools: Introduce $(xenlibs-ldflags, ) macro
    
    This avoids the need to open-code the list of flags needed to link
    with an in-tree Xen library when using -lxen*.
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Reviewed-by: Henry Wang <Henry.Wang@arm.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/Rules.mk                 | 8 ++++++++
 tools/golang/xenlight/Makefile | 2 +-
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/tools/Rules.mk b/tools/Rules.mk
index ce77dd2eb1..26958b2948 100644
--- a/tools/Rules.mk
+++ b/tools/Rules.mk
@@ -105,6 +105,14 @@ define xenlibs-ldlibs
     $(foreach lib,$(1),$(xenlibs-ldlibs-$(lib)))
 endef
 
+# Provide needed flags for linking an in-tree Xen library by an external
+# project (or when it is necessary to link with "-lxen$(1)" instead of using
+# the full path to the library).
+define xenlibs-ldflags
+    $(call xenlibs-rpath,$(1)) \
+    $(foreach lib,$(1),-L$(XEN_ROOT)/tools/libs/$(lib))
+endef
+
 define LIB_defs
  FILENAME_$(1) ?= xen$(1)
  XEN_libxen$(1) = $$(XEN_ROOT)/tools/libs/$(1)
diff --git a/tools/golang/xenlight/Makefile b/tools/golang/xenlight/Makefile
index 64671f246c..00e6d17f2b 100644
--- a/tools/golang/xenlight/Makefile
+++ b/tools/golang/xenlight/Makefile
@@ -27,7 +27,7 @@ GOXL_GEN_FILES = types.gen.go helpers.gen.go
 # so that it can find the actual library.
 .PHONY: build
 build: xenlight.go $(GOXL_GEN_FILES)
-	CGO_CFLAGS="$(CFLAGS_libxenlight) $(CFLAGS_libxentoollog) $(APPEND_CFLAGS)" CGO_LDFLAGS="$(LDLIBS_libxenlight) $(LDLIBS_libxentoollog) -L$(XEN_libxenlight) -L$(XEN_libxentoollog) $(APPEND_LDFLAGS)" $(GO) build -x
+	CGO_CFLAGS="$(CFLAGS_libxenlight) $(CFLAGS_libxentoollog) $(APPEND_CFLAGS)" CGO_LDFLAGS="$(call xenlibs-ldflags,light toollog) $(APPEND_LDFLAGS)" $(GO) build -x
 
 .PHONY: install
 install: build
--
generated by git-patchbot for /home/xen/git/xen.git#staging
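
A caller of the new macro would use it roughly as below. This is an illustrative sketch, not a rule from the patch; the library names (`ctrl`, `store`) and the corresponding `-lxen*` flags are assumptions for the example:

```make
# Link a tool against in-tree libxenctrl and libxenstore by name.
# xenlibs-ldflags expands to the rpath flags plus one -L per library
# directory, so plain -lxen* then resolves against the in-tree builds.
LDFLAGS += $(call xenlibs-ldflags,ctrl store)
LDLIBS  += -lxenctrl -lxenstore
```

Centralising this in one macro keeps the rpath and -L handling consistent across callers, as the golang/xenlight hunk in the patch demonstrates.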


From xen-changelog-bounces@lists.xenproject.org Fri Oct 14 15:22:36 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Oct 2022 15:22:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.422953.669352 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ojMWC-0003oI-6t; Fri, 14 Oct 2022 15:22:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 422953.669352; Fri, 14 Oct 2022 15:22:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ojMWC-0003oA-4F; Fri, 14 Oct 2022 15:22:36 +0000
Received: by outflank-mailman (input) for mailman id 422953;
 Fri, 14 Oct 2022 15:22:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojMWA-0003no-PA
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 15:22:34 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojMWA-0003Rd-ON
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 15:22:34 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojMWA-0002oL-Nb
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 15:22:34 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=LNBPezthyPXONJAksNwmHCEoYN9sugIpf8fmU6XcgyE=; b=3e/02QrlIbDMj61+w3gK/Bt4o2
	6mgFakzsXs4lHrEOlcJB4TMKeC19fv3YHLRBOSXnwEcFdEMLS8jYzbIg6H+Ldlo+FvuH3y6TO8xqc
	qT71IOTUiXSr6V/TkACm8shB7oFty6I5kRozKDs8wsVtEaGiaH6DeiBX8c/EuLeMi924=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] tools: Add -Werror by default to all tools/
Message-Id: <E1ojMWA-0002oL-Nb@xenbits.xenproject.org>
Date: Fri, 14 Oct 2022 15:22:34 +0000

commit e4f5949c446635a854f06317b81db11cccfdabee
Author:     Anthony PERARD <anthony.perard@citrix.com>
AuthorDate: Thu Oct 13 14:05:00 2022 +0100
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Fri Oct 14 16:16:54 2022 +0100

    tools: Add -Werror by default to all tools/
    
    And provide an option to ./configure to disable it.
    
    A follow-up patch will remove -Werror from every other Makefile in
    tools/.
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 config/Tools.mk.in |  1 +
 tools/Rules.mk     |  4 ++++
 tools/configure    | 26 ++++++++++++++++++++++++++
 tools/configure.ac |  1 +
 4 files changed, 32 insertions(+)

diff --git a/config/Tools.mk.in b/config/Tools.mk.in
index 6c1a0a676f..d0d460f922 100644
--- a/config/Tools.mk.in
+++ b/config/Tools.mk.in
@@ -1,5 +1,6 @@
 -include $(XEN_ROOT)/config/Paths.mk
 
+CONFIG_WERROR       := @werror@
 CONFIG_RUMP         := @CONFIG_RUMP@
 ifeq ($(CONFIG_RUMP),y)
 XEN_OS              := NetBSDRump
diff --git a/tools/Rules.mk b/tools/Rules.mk
index 26958b2948..a165dc4bda 100644
--- a/tools/Rules.mk
+++ b/tools/Rules.mk
@@ -133,6 +133,10 @@ endif
 
 CFLAGS_libxenlight += $(CFLAGS_libxenctrl)
 
+ifeq ($(CONFIG_WERROR),y)
+CFLAGS += -Werror
+endif
+
 ifeq ($(debug),y)
 # Use -Og if available, -O0 otherwise
 dbg_opt_level := $(call cc-option,$(CC),-Og,-O0)
diff --git a/tools/configure b/tools/configure
index 41deb7fb96..acd9a04c3b 100755
--- a/tools/configure
+++ b/tools/configure
@@ -716,6 +716,7 @@ ocamltools
 monitors
 githttp
 rpath
+werror
 DEBUG_DIR
 XEN_DUMP_DIR
 XEN_PAGING_DIR
@@ -805,6 +806,7 @@ with_xen_scriptdir
 with_xen_dumpdir
 with_rundir
 with_debugdir
+enable_werror
 enable_rpath
 enable_githttp
 enable_monitors
@@ -1490,6 +1492,7 @@ Optional Features:
   --disable-FEATURE       do not include FEATURE (same as --enable-FEATURE=no)
   --enable-FEATURE[=ARG]  include FEATURE [ARG=yes]
   --disable-largefile     omit support for large files
+  --disable-werror        Build tools without -Werror (default is ENABLED)
   --enable-rpath          Build tools with -Wl,-rpath,LIBDIR (default is
                           DISABLED)
   --enable-githttp        Download GIT repositories via HTTP (default is
@@ -4111,6 +4114,29 @@ DEBUG_DIR=$debugdir_path
 
 # Enable/disable options
 
+# Check whether --enable-werror was given.
+if test "${enable_werror+set}" = set; then :
+  enableval=$enable_werror;
+fi
+
+
+if test "x$enable_werror" = "xno"; then :
+
+    ax_cv_werror="n"
+
+elif test "x$enable_werror" = "xyes"; then :
+
+    ax_cv_werror="y"
+
+elif test -z $ax_cv_werror; then :
+
+    ax_cv_werror="y"
+
+fi
+werror=$ax_cv_werror
+
+
+
 # Check whether --enable-rpath was given.
 if test "${enable_rpath+set}" = set; then :
   enableval=$enable_rpath;
diff --git a/tools/configure.ac b/tools/configure.ac
index 32cbe6bd3c..09059bc569 100644
--- a/tools/configure.ac
+++ b/tools/configure.ac
@@ -81,6 +81,7 @@ m4_include([../m4/header.m4])
 AX_XEN_EXPAND_CONFIG()
 
 # Enable/disable options
+AX_ARG_DEFAULT_ENABLE([werror], [Build tools without -Werror])
 AX_ARG_DEFAULT_DISABLE([rpath], [Build tools with -Wl,-rpath,LIBDIR])
 AX_ARG_DEFAULT_DISABLE([githttp], [Download GIT repositories via HTTP])
 AX_ARG_DEFAULT_ENABLE([monitors], [Disable xenstat and xentop monitoring tools])
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Fri Oct 14 15:22:46 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Oct 2022 15:22:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.422954.669357 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ojMWM-0003rC-8o; Fri, 14 Oct 2022 15:22:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 422954.669357; Fri, 14 Oct 2022 15:22:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ojMWM-0003r4-5o; Fri, 14 Oct 2022 15:22:46 +0000
Received: by outflank-mailman (input) for mailman id 422954;
 Fri, 14 Oct 2022 15:22:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojMWK-0003qo-S5
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 15:22:44 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojMWK-0003Rn-RH
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 15:22:44 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojMWK-0002ok-Qd
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 15:22:44 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=HWtz+ZVf5iePHZJTYvoV0ehSsjG8sr0yye5fEzGq6h8=; b=MlRD3XNcEyEIxD4ZC6fdBYFfRw
	TOeGeMp4ZF5ZDFgNaZd+DhapP2MgRTM/t40EGEOO08Rgwk47KEB7ehhFawRL2kDuZNEbrBX03VMVo
	b2Pgp49isEIoYDaRt5oKxV4tfVxU7V+GHstziIMF85TnJ46w5gasbB5RMisp2f0JFtt4=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] tools: Remove -Werror everywhere else
Message-Id: <E1ojMWK-0002ok-Qd@xenbits.xenproject.org>
Date: Fri, 14 Oct 2022 15:22:44 +0000

commit 40d96f0c7d5399f9b824926279d41ead974fbe39
Author:     Anthony PERARD <anthony.perard@citrix.com>
AuthorDate: Thu Oct 13 14:05:01 2022 +0100
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Fri Oct 14 16:17:41 2022 +0100

    tools: Remove -Werror everywhere else
    
    The previous changeset, e4f5949c4466 ("tools: Add -Werror by default to all
    tools/"), added "-Werror" to CFLAGS in tools/Rules.mk.  Remove it from
    everywhere else now it is duplicated.
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Acked-by: Daniel P. Smith <dpsmith@apertussolutions.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/console/client/Makefile   | 1 -
 tools/console/daemon/Makefile   | 1 -
 tools/debugger/gdbsx/Rules.mk   | 2 +-
 tools/debugger/kdd/Makefile     | 1 -
 tools/firmware/Rules.mk         | 2 --
 tools/flask/utils/Makefile      | 1 -
 tools/fuzz/cpu-policy/Makefile  | 2 +-
 tools/libfsimage/common.mk      | 2 +-
 tools/libs/libs.mk              | 2 +-
 tools/misc/Makefile             | 1 -
 tools/ocaml/common.make         | 2 +-
 tools/pygrub/setup.py           | 2 +-
 tools/python/setup.py           | 2 +-
 tools/tests/cpu-policy/Makefile | 2 +-
 tools/tests/depriv/Makefile     | 2 +-
 tools/tests/resource/Makefile   | 1 -
 tools/tests/tsx/Makefile        | 1 -
 tools/tests/xenstore/Makefile   | 1 -
 tools/xcutils/Makefile          | 2 --
 tools/xenmon/Makefile           | 1 -
 tools/xenpaging/Makefile        | 1 -
 tools/xenpmd/Makefile           | 1 -
 tools/xenstore/Makefile.common  | 1 -
 tools/xentop/Makefile           | 2 +-
 tools/xentrace/Makefile         | 2 --
 tools/xl/Makefile               | 2 +-
 26 files changed, 11 insertions(+), 29 deletions(-)

diff --git a/tools/console/client/Makefile b/tools/console/client/Makefile
index e2f2554f92..62d89fdeb9 100644
--- a/tools/console/client/Makefile
+++ b/tools/console/client/Makefile
@@ -1,7 +1,6 @@
 XEN_ROOT=$(CURDIR)/../../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS += -Werror
 CFLAGS += $(CFLAGS_libxenctrl)
 CFLAGS += $(CFLAGS_libxenstore)
 CFLAGS += -include $(XEN_ROOT)/tools/config.h
diff --git a/tools/console/daemon/Makefile b/tools/console/daemon/Makefile
index 99bb33b6a2..9fc3b6711f 100644
--- a/tools/console/daemon/Makefile
+++ b/tools/console/daemon/Makefile
@@ -1,7 +1,6 @@
 XEN_ROOT=$(CURDIR)/../../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS += -Werror
 CFLAGS += $(CFLAGS_libxenctrl)
 CFLAGS += $(CFLAGS_libxenstore)
 CFLAGS += $(CFLAGS_libxenevtchn)
diff --git a/tools/debugger/gdbsx/Rules.mk b/tools/debugger/gdbsx/Rules.mk
index 920f1c87fb..1f631b62da 100644
--- a/tools/debugger/gdbsx/Rules.mk
+++ b/tools/debugger/gdbsx/Rules.mk
@@ -1,6 +1,6 @@
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS   += -Werror -Wmissing-prototypes 
+CFLAGS   += -Wmissing-prototypes
 # (gcc 4.3x and later)   -Wconversion -Wno-sign-conversion
 
 CFLAGS-$(clang) += -Wno-ignored-attributes
diff --git a/tools/debugger/kdd/Makefile b/tools/debugger/kdd/Makefile
index 26116949d4..a72ad3b1e0 100644
--- a/tools/debugger/kdd/Makefile
+++ b/tools/debugger/kdd/Makefile
@@ -1,7 +1,6 @@
 XEN_ROOT = $(CURDIR)/../../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS  += -Werror
 CFLAGS  += $(CFLAGS_libxenctrl)
 CFLAGS  += -DXC_WANT_COMPAT_MAP_FOREIGN_API
 LDLIBS  += $(LDLIBS_libxenctrl)
diff --git a/tools/firmware/Rules.mk b/tools/firmware/Rules.mk
index 278cca01e4..d3482c9ec4 100644
--- a/tools/firmware/Rules.mk
+++ b/tools/firmware/Rules.mk
@@ -11,8 +11,6 @@ ifneq ($(debug),y)
 CFLAGS += -DNDEBUG
 endif
 
-CFLAGS += -Werror
-
 $(call cc-options-add,CFLAGS,CC,$(EMBEDDED_EXTRA_CFLAGS))
 
 $(call cc-option-add,CFLAGS,CC,-fcf-protection=none)
diff --git a/tools/flask/utils/Makefile b/tools/flask/utils/Makefile
index 6be134142a..88d7edb6b1 100644
--- a/tools/flask/utils/Makefile
+++ b/tools/flask/utils/Makefile
@@ -1,7 +1,6 @@
 XEN_ROOT=$(CURDIR)/../../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS += -Werror
 CFLAGS += $(CFLAGS_libxenctrl)
 
 TARGETS := flask-loadpolicy flask-setenforce flask-getenforce flask-label-pci flask-get-bool flask-set-bool
diff --git a/tools/fuzz/cpu-policy/Makefile b/tools/fuzz/cpu-policy/Makefile
index 41a2230408..6e7743e0aa 100644
--- a/tools/fuzz/cpu-policy/Makefile
+++ b/tools/fuzz/cpu-policy/Makefile
@@ -17,7 +17,7 @@ install: all
 
 .PHONY: uninstall
 
-CFLAGS += -Werror $(CFLAGS_xeninclude) -D__XEN_TOOLS__
+CFLAGS += $(CFLAGS_xeninclude) -D__XEN_TOOLS__
 CFLAGS += $(APPEND_CFLAGS) -Og
 
 vpath %.c ../../../xen/lib/x86
diff --git a/tools/libfsimage/common.mk b/tools/libfsimage/common.mk
index 77bc957f27..4fc8c66795 100644
--- a/tools/libfsimage/common.mk
+++ b/tools/libfsimage/common.mk
@@ -2,7 +2,7 @@ include $(XEN_ROOT)/tools/Rules.mk
 
 FSDIR := $(libdir)/xenfsimage
 CFLAGS += -Wno-unknown-pragmas -I$(XEN_ROOT)/tools/libfsimage/common/ -DFSIMAGE_FSDIR=\"$(FSDIR)\"
-CFLAGS += -Werror -D_GNU_SOURCE
+CFLAGS += -D_GNU_SOURCE
 LDFLAGS += -L../common/
 
 PIC_OBJS = $(patsubst %.c,%.opic,$(LIB_SRCS-y))
diff --git a/tools/libs/libs.mk b/tools/libs/libs.mk
index 2b8e7a6128..e47fb30ed4 100644
--- a/tools/libs/libs.mk
+++ b/tools/libs/libs.mk
@@ -14,7 +14,7 @@ MINOR ?= 0
 
 SHLIB_LDFLAGS += -Wl,--version-script=libxen$(LIBNAME).map
 
-CFLAGS   += -Werror -Wmissing-prototypes
+CFLAGS   += -Wmissing-prototypes
 CFLAGS   += $(CFLAGS_xeninclude)
 CFLAGS   += $(foreach lib, $(USELIBS_$(LIBNAME)), $(CFLAGS_libxen$(lib)))
 
diff --git a/tools/misc/Makefile b/tools/misc/Makefile
index 0e02401227..1c6e1d6a04 100644
--- a/tools/misc/Makefile
+++ b/tools/misc/Makefile
@@ -1,7 +1,6 @@
 XEN_ROOT=$(CURDIR)/../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS += -Werror
 # Include configure output (config.h)
 CFLAGS += -include $(XEN_ROOT)/tools/config.h
 CFLAGS += $(CFLAGS_libxenevtchn)
diff --git a/tools/ocaml/common.make b/tools/ocaml/common.make
index d5478f626f..0c8a597d5b 100644
--- a/tools/ocaml/common.make
+++ b/tools/ocaml/common.make
@@ -9,7 +9,7 @@ OCAMLLEX ?= ocamllex
 OCAMLYACC ?= ocamlyacc
 OCAMLFIND ?= ocamlfind
 
-CFLAGS += -fPIC -Werror -I$(shell ocamlc -where)
+CFLAGS += -fPIC -I$(shell ocamlc -where)
 
 OCAMLOPTFLAG_G := $(shell $(OCAMLOPT) -h 2>&1 | sed -n 's/^  *\(-g\) .*/\1/p')
 OCAMLOPTFLAGS = $(OCAMLOPTFLAG_G) -ccopt "$(LDFLAGS)" -dtypes $(OCAMLINCLUDE) -cc $(CC) -w F -warn-error F
diff --git a/tools/pygrub/setup.py b/tools/pygrub/setup.py
index b8f1dc4590..0e4e3d02d3 100644
--- a/tools/pygrub/setup.py
+++ b/tools/pygrub/setup.py
@@ -3,7 +3,7 @@ from distutils.ccompiler import new_compiler
 import os
 import sys
 
-extra_compile_args  = [ "-fno-strict-aliasing", "-Werror" ]
+extra_compile_args  = [ "-fno-strict-aliasing" ]
 
 XEN_ROOT = "../.."
 
diff --git a/tools/python/setup.py b/tools/python/setup.py
index 8c95db7769..721a3141d7 100644
--- a/tools/python/setup.py
+++ b/tools/python/setup.py
@@ -8,7 +8,7 @@ SHLIB_libxenctrl = os.environ['SHLIB_libxenctrl'].split()
 SHLIB_libxenguest = os.environ['SHLIB_libxenguest'].split()
 SHLIB_libxenstore = os.environ['SHLIB_libxenstore'].split()
 
-extra_compile_args  = [ "-fno-strict-aliasing", "-Werror" ]
+extra_compile_args  = [ "-fno-strict-aliasing" ]
 
 PATH_XEN      = XEN_ROOT + "/tools/include"
 PATH_LIBXENTOOLLOG = XEN_ROOT + "/tools/libs/toollog"
diff --git a/tools/tests/cpu-policy/Makefile b/tools/tests/cpu-policy/Makefile
index 93af9d76fa..c5b81afc71 100644
--- a/tools/tests/cpu-policy/Makefile
+++ b/tools/tests/cpu-policy/Makefile
@@ -36,7 +36,7 @@ install: all
 uninstall:
 	$(RM) -- $(addprefix $(DESTDIR)$(LIBEXEC_BIN)/,$(TARGETS))
 
-CFLAGS += -Werror -D__XEN_TOOLS__
+CFLAGS += -D__XEN_TOOLS__
 CFLAGS += $(CFLAGS_xeninclude)
 CFLAGS += $(APPEND_CFLAGS)
 
diff --git a/tools/tests/depriv/Makefile b/tools/tests/depriv/Makefile
index 3cba28da25..7d9e3b01bb 100644
--- a/tools/tests/depriv/Makefile
+++ b/tools/tests/depriv/Makefile
@@ -1,7 +1,7 @@
 XEN_ROOT=$(CURDIR)/../../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS += -Werror -Wno-declaration-after-statement
+CFLAGS += -Wno-declaration-after-statement
 
 CFLAGS += $(CFLAGS_xeninclude)
 CFLAGS += $(CFLAGS_libxenctrl)
diff --git a/tools/tests/resource/Makefile b/tools/tests/resource/Makefile
index b3cd70c06d..a5856bf095 100644
--- a/tools/tests/resource/Makefile
+++ b/tools/tests/resource/Makefile
@@ -27,7 +27,6 @@ install: all
 uninstall:
 	$(RM) -- $(DESTDIR)$(LIBEXEC_BIN)/$(TARGET)
 
-CFLAGS += -Werror
 CFLAGS += $(CFLAGS_xeninclude)
 CFLAGS += $(CFLAGS_libxenctrl)
 CFLAGS += $(CFLAGS_libxenforeginmemory)
diff --git a/tools/tests/tsx/Makefile b/tools/tests/tsx/Makefile
index d7d2a5d95e..a4f516b725 100644
--- a/tools/tests/tsx/Makefile
+++ b/tools/tests/tsx/Makefile
@@ -26,7 +26,6 @@ uninstall:
 .PHONY: uninstall
 uninstall:
 
-CFLAGS += -Werror
 CFLAGS += -I$(XEN_ROOT)/tools/libs/ctrl -I$(XEN_ROOT)/tools/libs/guest
 CFLAGS += $(CFLAGS_xeninclude)
 CFLAGS += $(CFLAGS_libxenctrl)
diff --git a/tools/tests/xenstore/Makefile b/tools/tests/xenstore/Makefile
index 239e1dce47..202dda0d3c 100644
--- a/tools/tests/xenstore/Makefile
+++ b/tools/tests/xenstore/Makefile
@@ -27,7 +27,6 @@ install: all
 uninstall:
 	$(RM) -- $(addprefix $(DESTDIR)$(LIBEXEC_BIN)/,$(TARGETS))
 
-CFLAGS += -Werror
 CFLAGS += $(CFLAGS_libxenstore)
 CFLAGS += $(APPEND_CFLAGS)
 
diff --git a/tools/xcutils/Makefile b/tools/xcutils/Makefile
index e40a2c4bfa..3687f6cd8f 100644
--- a/tools/xcutils/Makefile
+++ b/tools/xcutils/Makefile
@@ -13,8 +13,6 @@ include $(XEN_ROOT)/tools/Rules.mk
 
 TARGETS := readnotes lsevtchn
 
-CFLAGS += -Werror
-
 CFLAGS_readnotes.o  := $(CFLAGS_libxenevtchn) $(CFLAGS_libxenctrl) $(CFLAGS_libxenguest)
 CFLAGS_lsevtchn.o   := $(CFLAGS_libxenevtchn) $(CFLAGS_libxenctrl)
 
diff --git a/tools/xenmon/Makefile b/tools/xenmon/Makefile
index 3e150b0659..679c4b41a3 100644
--- a/tools/xenmon/Makefile
+++ b/tools/xenmon/Makefile
@@ -13,7 +13,6 @@
 XEN_ROOT=$(CURDIR)/../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS  += -Werror
 CFLAGS  += $(CFLAGS_libxenevtchn)
 CFLAGS  += $(CFLAGS_libxenctrl)
 LDLIBS  += $(LDLIBS_libxenctrl)
diff --git a/tools/xenpaging/Makefile b/tools/xenpaging/Makefile
index e2ed9eaa3f..835cf2b965 100644
--- a/tools/xenpaging/Makefile
+++ b/tools/xenpaging/Makefile
@@ -12,7 +12,6 @@ OBJS-y   += xenpaging.o
 OBJS-y   += policy_$(POLICY).o
 OBJS-y   += pagein.o
 
-CFLAGS   += -Werror
 CFLAGS   += -Wno-unused
 
 TARGETS := xenpaging
diff --git a/tools/xenpmd/Makefile b/tools/xenpmd/Makefile
index e0d3f06ab2..8da20510b5 100644
--- a/tools/xenpmd/Makefile
+++ b/tools/xenpmd/Makefile
@@ -1,7 +1,6 @@
 XEN_ROOT=$(CURDIR)/../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS += -Werror
 CFLAGS += $(CFLAGS_libxenstore)
 
 LDLIBS += $(LDLIBS_libxenstore)
diff --git a/tools/xenstore/Makefile.common b/tools/xenstore/Makefile.common
index 21b78b0538..ddbac052ac 100644
--- a/tools/xenstore/Makefile.common
+++ b/tools/xenstore/Makefile.common
@@ -9,7 +9,6 @@ XENSTORED_OBJS-$(CONFIG_NetBSD) += xenstored_posix.o
 XENSTORED_OBJS-$(CONFIG_FreeBSD) += xenstored_posix.o
 XENSTORED_OBJS-$(CONFIG_MiniOS) += xenstored_minios.o
 
-CFLAGS += -Werror
 # Include configure output (config.h)
 CFLAGS += -include $(XEN_ROOT)/tools/config.h
 CFLAGS += -I./include
diff --git a/tools/xentop/Makefile b/tools/xentop/Makefile
index 7bd96f34d5..70cc2211c5 100644
--- a/tools/xentop/Makefile
+++ b/tools/xentop/Makefile
@@ -13,7 +13,7 @@
 XEN_ROOT=$(CURDIR)/../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS += -DGCC_PRINTF -Werror $(CFLAGS_libxenstat)
+CFLAGS += -DGCC_PRINTF $(CFLAGS_libxenstat)
 LDLIBS += $(LDLIBS_libxenstat) $(CURSES_LIBS) $(TINFO_LIBS) $(SOCKET_LIBS) -lm
 CFLAGS += -DHOST_$(XEN_OS)
 
diff --git a/tools/xentrace/Makefile b/tools/xentrace/Makefile
index 63f2f6532d..d50d400472 100644
--- a/tools/xentrace/Makefile
+++ b/tools/xentrace/Makefile
@@ -1,8 +1,6 @@
 XEN_ROOT=$(CURDIR)/../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS += -Werror
-
 CFLAGS += $(CFLAGS_libxenevtchn)
 CFLAGS += $(CFLAGS_libxenctrl)
 LDLIBS += $(LDLIBS_libxenevtchn)
diff --git a/tools/xl/Makefile b/tools/xl/Makefile
index b7f439121a..5f7aa5f46c 100644
--- a/tools/xl/Makefile
+++ b/tools/xl/Makefile
@@ -5,7 +5,7 @@
 XEN_ROOT = $(CURDIR)/../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS += -Werror -Wno-format-zero-length -Wmissing-declarations \
+CFLAGS += -Wno-format-zero-length -Wmissing-declarations \
 	-Wno-declaration-after-statement -Wformat-nonliteral
 CFLAGS += -fPIC
 
--
generated by git-patchbot for /home/xen/git/xen.git#staging

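[Editor's note] A quick way to confirm the cleanup above left no stray copies behind is to grep the tools tree for the flag. The sketch below is illustrative only, using a throwaway tree; in real usage the grep would run over tools/ at the top of xen.git.

```shell
# Build a tiny stand-in for a cleaned-up leaf Makefile, then verify no
# -Werror remains anywhere under the tree (it now comes only from the
# central tools/Rules.mk include).
tree=$(mktemp -d)
mkdir -p "$tree/tools/console/client"
printf 'CFLAGS += $(CFLAGS_libxenctrl)\n' > "$tree/tools/console/client/Makefile"
if grep -rq -- '-Werror' "$tree/tools"; then
    echo "stray -Werror remains"
else
    echo "clean"
fi
rm -rf "$tree"
```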

From xen-changelog-bounces@lists.xenproject.org Fri Oct 14 20:11:08 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Oct 2022 20:11:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.423099.669549 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ojR1N-0005sn-1q; Fri, 14 Oct 2022 20:11:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 423099.669549; Fri, 14 Oct 2022 20:11:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ojR1M-0005sf-VV; Fri, 14 Oct 2022 20:11:04 +0000
Received: by outflank-mailman (input) for mailman id 423099;
 Fri, 14 Oct 2022 20:11:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojR1M-0005sR-CO
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 20:11:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojR1M-0000fk-9J
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 20:11:04 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojR1M-0004C8-8I
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 20:11:04 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=bS7Wfrq1g4twCdSc9jX8z9DoTImq9dIPCqADqBtcZ9Q=; b=WqJZ5I/jqXF8mZAXHHX+O5fGmS
	KEcz2dfJ93IHgCgHcCY42Rly1dsf6+K+792K5hjld5nBdgUU4RikKVHlxszQnxvWeZi2mDjWCBPHq
	GqWtpHNMuQ9jaUby/M9iFrEElUNqyk9gwmxtx9dKxxKGvI6HA/fc8Tq5wOnqpPhhhAZY=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] tools/hotplug: Generate "hotplugpath.sh" with configure
Message-Id: <E1ojR1M-0004C8-8I@xenbits.xenproject.org>
Date: Fri, 14 Oct 2022 20:11:04 +0000

commit f3fae4184fb2e90b715f7361f7bd4f37f400587f
Author:     Anthony PERARD <anthony.perard@citrix.com>
AuthorDate: Thu Oct 13 14:05:02 2022 +0100
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Fri Oct 14 20:56:57 2022 +0100

    tools/hotplug: Generate "hotplugpath.sh" with configure
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/configure                        |  3 ++-
 tools/configure.ac                     |  1 +
 tools/hotplug/common/Makefile          | 10 ++--------
 tools/hotplug/common/hotplugpath.sh.in | 16 ++++++++++++++++
 4 files changed, 21 insertions(+), 9 deletions(-)

diff --git a/tools/configure b/tools/configure
index acd9a04c3b..6199823f5a 100755
--- a/tools/configure
+++ b/tools/configure
@@ -2456,7 +2456,7 @@ ac_compiler_gnu=$ac_cv_c_compiler_gnu
 
 
 
-ac_config_files="$ac_config_files ../config/Tools.mk hotplug/FreeBSD/rc.d/xencommons hotplug/FreeBSD/rc.d/xendriverdomain hotplug/Linux/init.d/sysconfig.xencommons hotplug/Linux/init.d/sysconfig.xendomains hotplug/Linux/init.d/xen-watchdog hotplug/Linux/init.d/xencommons hotplug/Linux/init.d/xendomains hotplug/Linux/init.d/xendriverdomain hotplug/Linux/launch-xenstore hotplug/Linux/vif-setup hotplug/Linux/xen-hotplug-common.sh hotplug/Linux/xendomains hotplug/NetBSD/rc.d/xencommons hotplug/NetBSD/rc.d/xendriverdomain ocaml/libs/xs/paths.ml ocaml/xenstored/paths.ml ocaml/xenstored/oxenstored.conf"
+ac_config_files="$ac_config_files ../config/Tools.mk hotplug/common/hotplugpath.sh hotplug/FreeBSD/rc.d/xencommons hotplug/FreeBSD/rc.d/xendriverdomain hotplug/Linux/init.d/sysconfig.xencommons hotplug/Linux/init.d/sysconfig.xendomains hotplug/Linux/init.d/xen-watchdog hotplug/Linux/init.d/xencommons hotplug/Linux/init.d/xendomains hotplug/Linux/init.d/xendriverdomain hotplug/Linux/launch-xenstore hotplug/Linux/vif-setup hotplug/Linux/xen-hotplug-common.sh hotplug/Linux/xendomains hotplug/NetBSD/rc.d/xencommons hotplug/NetBSD/rc.d/xendriverdomain ocaml/libs/xs/paths.ml ocaml/xenstored/paths.ml ocaml/xenstored/oxenstored.conf"
 
 ac_config_headers="$ac_config_headers config.h"
 
@@ -10947,6 +10947,7 @@ for ac_config_target in $ac_config_targets
 do
   case $ac_config_target in
     "../config/Tools.mk") CONFIG_FILES="$CONFIG_FILES ../config/Tools.mk" ;;
+    "hotplug/common/hotplugpath.sh") CONFIG_FILES="$CONFIG_FILES hotplug/common/hotplugpath.sh" ;;
     "hotplug/FreeBSD/rc.d/xencommons") CONFIG_FILES="$CONFIG_FILES hotplug/FreeBSD/rc.d/xencommons" ;;
     "hotplug/FreeBSD/rc.d/xendriverdomain") CONFIG_FILES="$CONFIG_FILES hotplug/FreeBSD/rc.d/xendriverdomain" ;;
     "hotplug/Linux/init.d/sysconfig.xencommons") CONFIG_FILES="$CONFIG_FILES hotplug/Linux/init.d/sysconfig.xencommons" ;;
diff --git a/tools/configure.ac b/tools/configure.ac
index 09059bc569..18e481d77e 100644
--- a/tools/configure.ac
+++ b/tools/configure.ac
@@ -7,6 +7,7 @@ AC_INIT([Xen Hypervisor Tools], m4_esyscmd([../version.sh ../xen/Makefile]),
 AC_CONFIG_SRCDIR([libs/light/libxl.c])
 AC_CONFIG_FILES([
 ../config/Tools.mk
+hotplug/common/hotplugpath.sh
 hotplug/FreeBSD/rc.d/xencommons
 hotplug/FreeBSD/rc.d/xendriverdomain
 hotplug/Linux/init.d/sysconfig.xencommons
diff --git a/tools/hotplug/common/Makefile b/tools/hotplug/common/Makefile
index e8a8dbea6c..62afe1019e 100644
--- a/tools/hotplug/common/Makefile
+++ b/tools/hotplug/common/Makefile
@@ -1,19 +1,14 @@
 XEN_ROOT = $(CURDIR)/../../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-HOTPLUGPATH := hotplugpath.sh
-
 # OS-independent hotplug scripts go in this directory
 
 # Xen scripts to go there.
 XEN_SCRIPTS :=
-XEN_SCRIPT_DATA := $(HOTPLUGPATH)
-
-genpath-target = $(call buildmakevars2file,$(HOTPLUGPATH))
-$(eval $(genpath-target))
+XEN_SCRIPT_DATA := hotplugpath.sh
 
 .PHONY: all
-all: $(HOTPLUGPATH)
+all:
 
 .PHONY: install
 install: install-scripts
@@ -40,7 +35,6 @@ uninstall-scripts:
 
 .PHONY: clean
 clean:
-	rm -f $(HOTPLUGPATH)
 
 .PHONY: distclean
 distclean: clean
diff --git a/tools/hotplug/common/hotplugpath.sh.in b/tools/hotplug/common/hotplugpath.sh.in
new file mode 100644
index 0000000000..1036b884b8
--- /dev/null
+++ b/tools/hotplug/common/hotplugpath.sh.in
@@ -0,0 +1,16 @@
+sbindir="@sbindir@"
+bindir="@bindir@"
+LIBEXEC="@LIBEXEC@"
+LIBEXEC_BIN="@LIBEXEC_BIN@"
+libdir="@libdir@"
+SHAREDIR="@SHAREDIR@"
+XENFIRMWAREDIR="@XENFIRMWAREDIR@"
+XEN_CONFIG_DIR="@XEN_CONFIG_DIR@"
+XEN_SCRIPT_DIR="@XEN_SCRIPT_DIR@"
+XEN_LOCK_DIR="@XEN_LOCK_DIR@"
+XEN_RUN_DIR="@XEN_RUN_DIR@"
+XEN_PAGING_DIR="@XEN_PAGING_DIR@"
+XEN_DUMP_DIR="@XEN_DUMP_DIR@"
+XEN_LOG_DIR="@XEN_LOG_DIR@"
+XEN_LIB_DIR="@XEN_LIB_DIR@"
+XEN_RUN_STORED="@XEN_RUN_STORED@"
--
generated by git-patchbot for /home/xen/git/xen.git#staging

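[Editor's note] The new hotplugpath.sh.in is a standard autoconf template: `configure` rewrites each `@name@` placeholder with the value chosen at configure time. A minimal sketch of that substitution follows; the sed one-liner and the `/usr/local/sbin` value are illustrative assumptions, not configure's actual implementation.

```shell
# Substitute one placeholder from hotplugpath.sh.in the way
# AC_CONFIG_FILES would, using an assumed configured value.
sbindir=/usr/local/sbin
printf 'sbindir="@sbindir@"\n' | sed "s|@sbindir@|$sbindir|"
```

Running it prints `sbindir="/usr/local/sbin"`, i.e. the generated hotplugpath.sh line.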

From xen-changelog-bounces@lists.xenproject.org Fri Oct 14 20:11:15 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Oct 2022 20:11:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.423103.669553 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ojR1X-0005yN-3J; Fri, 14 Oct 2022 20:11:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 423103.669553; Fri, 14 Oct 2022 20:11:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ojR1X-0005yG-0p; Fri, 14 Oct 2022 20:11:15 +0000
Received: by outflank-mailman (input) for mailman id 423103;
 Fri, 14 Oct 2022 20:11:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojR1W-0005y6-Da
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 20:11:14 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojR1W-0000g1-CR
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 20:11:14 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojR1W-0004Cq-BO
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 20:11:14 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=+7aON2NLBAPHFAyrAnSbVyxEyDJBPPSaOkrpOSIzfp8=; b=zJMx/OO3z+2aQQ4wreVdp/wdh5
	00XPboe6vffIhvGmgxDRTwhybZKFWQ9HilzxkJ5BC4x9MZ7i0M1pDVND2WS8vPTesRyzWYlKohSg0
	DHH56UXdjVnokOL7EN9ME5qffD24/Xjpf4ga5HZuaxwCv9I/2LrijaY4XPazUidZTXkg=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] libs/light/gentypes.py: allow generating headers in a subdirectory
Message-Id: <E1ojR1W-0004Cq-BO@xenbits.xenproject.org>
Date: Fri, 14 Oct 2022 20:11:14 +0000

commit 4c1a3cca790f0a11d3d803f0406845f46a50d177
Author:     Anthony PERARD <anthony.perard@citrix.com>
AuthorDate: Thu Oct 13 14:05:03 2022 +0100
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Fri Oct 14 20:56:57 2022 +0100

    libs/light/gentypes.py: allow generating headers in a subdirectory
    
    This doesn't matter yet, but it will when, for example, the script is
    run from tools/ to generate files in tools/libs/light/.
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/libs/light/gentypes.py | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/tools/libs/light/gentypes.py b/tools/libs/light/gentypes.py
index 9a45e45acc..3fe3873242 100644
--- a/tools/libs/light/gentypes.py
+++ b/tools/libs/light/gentypes.py
@@ -584,6 +584,9 @@ def libxl_C_enum_from_string(ty, str, e, indent = "    "):
         s = indent + s
     return s.replace("\n", "\n%s" % indent).rstrip(indent)
 
+def clean_header_define(header_path):
+    return header_path.split('/')[-1].upper().replace('.','_')
+
 
 if __name__ == '__main__':
     if len(sys.argv) != 6:
@@ -598,7 +601,7 @@ if __name__ == '__main__':
 
     f = open(header, "w")
 
-    header_define = header.upper().replace('.','_')
+    header_define = clean_header_define(header)
     f.write("""#ifndef %s
 #define %s
 
@@ -648,7 +651,7 @@ if __name__ == '__main__':
 
     f = open(header_json, "w")
 
-    header_json_define = header_json.upper().replace('.','_')
+    header_json_define = clean_header_define(header_json)
     f.write("""#ifndef %s
 #define %s
 
@@ -672,7 +675,7 @@ if __name__ == '__main__':
 
     f = open(header_private, "w")
 
-    header_private_define = header_private.upper().replace('.','_')
+    header_private_define = clean_header_define(header_private)
     f.write("""#ifndef %s
 #define %s
 
--
generated by git-patchbot for /home/xen/git/xen.git#staging

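[Editor's note] The new clean_header_define() helper keeps the generated include guard valid when the header argument carries a directory prefix. A shell rendition of the same transformation (the example path is hypothetical):

```shell
# basename drops the directory part; tr then maps lowercase letters and
# '.' to uppercase and '_', mirroring gentypes.py's
# split('/')[-1].upper().replace('.', '_').
header="libs/light/_libxl_types.h"
guard=$(basename "$header" | tr 'a-z.' 'A-Z_')
echo "$guard"    # _LIBXL_TYPES_H
```

Without the basename step, the guard would contain '/' characters and not be a legal C macro name.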

From xen-changelog-bounces@lists.xenproject.org Fri Oct 14 20:11:25 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Oct 2022 20:11:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.423104.669557 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ojR1h-000643-4s; Fri, 14 Oct 2022 20:11:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 423104.669557; Fri, 14 Oct 2022 20:11:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ojR1h-00063w-2O; Fri, 14 Oct 2022 20:11:25 +0000
Received: by outflank-mailman (input) for mailman id 423104;
 Fri, 14 Oct 2022 20:11:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojR1g-00063k-M4
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 20:11:24 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojR1g-0000gB-FW
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 20:11:24 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojR1g-0004DG-EW
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 20:11:24 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=rz49L0XfxWcieKeYn+MszCZ9R3tr5z1/C3H+UlDCO1Q=; b=YJX5gzDLGra2JcljdKchaynjKJ
	zXwjEnA10V/uOd/INB5nu2gdUDvAbM7axS9tZt94FJC1/MhewVe8WTf4lHaQptIOnkegWWJV9i4LU
	dtuaN7wIqxKmxkbKUb9Q/AK27fziafT6aLgvYqjtXY0h/+05K2DTMe+SdK+xlzyXmCKc=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] git-checkout.sh: handle running git-checkout from a different directory
Message-Id: <E1ojR1g-0004DG-EW@xenbits.xenproject.org>
Date: Fri, 14 Oct 2022 20:11:24 +0000

commit 4834dd5521a36cec118ed84b7c09a509edaafa6b
Author:     Anthony PERARD <anthony.perard@citrix.com>
AuthorDate: Thu Oct 13 14:05:04 2022 +0100
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Fri Oct 14 20:56:57 2022 +0100

    git-checkout.sh: handle running git-checkout from a different directory
    
    "$DIR" might not be a full path and it might not have `pwd` as ".."
    directory. So use `cd -` to undo the first `cd` command.
    
    Also, use `basename` so the symbolic link is created with a relative
    target.
    
    This doesn't matter yet, but it will when, for example, the commands
    to clone OVMF are run from tools/ rather than tools/firmware/.
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 scripts/git-checkout.sh | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/scripts/git-checkout.sh b/scripts/git-checkout.sh
index 20ae31ff23..fd4425ac4e 100755
--- a/scripts/git-checkout.sh
+++ b/scripts/git-checkout.sh
@@ -19,9 +19,9 @@ if test \! -d $DIR-remote; then
 		cd $DIR-remote.tmp
 		$GIT branch -D dummy >/dev/null 2>&1 ||:
 		$GIT checkout -b dummy $TAG
-		cd ..
+		cd -
 	fi
 	mv $DIR-remote.tmp $DIR-remote
 fi
 rm -f $DIR
-ln -sf $DIR-remote $DIR
+ln -sf $(basename $DIR-remote) $DIR
--
generated by git-patchbot for /home/xen/git/xen.git#staging

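[Editor's note] Both fixes can be demonstrated in miniature; the directory names below are made up for illustration.

```shell
# `cd -` returns to $OLDPWD, i.e. wherever the script started, even when
# $DIR has several path components and "cd .." would land elsewhere.
start=$(pwd)
work=$(mktemp -d)
mkdir -p "$work/a/b"
cd "$work/a/b"       # like `cd $DIR-remote.tmp` with a nested $DIR
cd - >/dev/null      # `cd ..` would end up in $work/a instead
[ "$(pwd)" = "$start" ] && echo "back at start"

# basename gives a symlink target relative to the link's own directory,
# so the link keeps working if the surrounding tree is moved:
basename "$work/a/b-remote"
rm -rf "$work"
```

The second command prints just `b-remote`, which is what `ln -sf $(basename $DIR-remote) $DIR` now records as the link target.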

From xen-changelog-bounces@lists.xenproject.org Fri Oct 14 20:11:35 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Oct 2022 20:11:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.423105.669561 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ojR1r-000675-6d; Fri, 14 Oct 2022 20:11:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 423105.669561; Fri, 14 Oct 2022 20:11:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ojR1r-00066y-42; Fri, 14 Oct 2022 20:11:35 +0000
Received: by outflank-mailman (input) for mailman id 423105;
 Fri, 14 Oct 2022 20:11:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojR1q-00066r-JJ
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 20:11:34 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojR1q-0000gP-IS
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 20:11:34 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojR1q-0004Dh-HX
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 20:11:34 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=ZVXi2kIckjlPtKdY6f1488i0+0hUnui81p+L6uqPJiE=; b=vI98VFwfzqZit6+23Tw6HZXoxP
	tmLlIy4f25CVMhkvJPa3ESPwZgvWH9yrdeqgcJ3bhRcZKB74KpWdnf/7lBqN3jixoiHMULx3zAG3P
	TYYVu9HSlWLxp7dYt+grneCFczUBkfafiIyylUzM5xa40N76bLdd9C7MdmgrHGXZyicc=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] libs: Avoid exposing -Wl,--version-script to other built libraries
Message-Id: <E1ojR1q-0004Dh-HX@xenbits.xenproject.org>
Date: Fri, 14 Oct 2022 20:11:34 +0000

commit 13c05b9efa2b825935ff9215575b53c1f9ad7965
Author:     Anthony PERARD <anthony.perard@citrix.com>
AuthorDate: Thu Oct 13 14:05:05 2022 +0100
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Fri Oct 14 20:56:57 2022 +0100

    libs: Avoid exposing -Wl,--version-script to other built library
    
    $(SHLIB_LDFLAGS) is used by more targets than the single target that
    expects it (libxenfoo.so.X.Y). There are also some dynamic libraries in
    stats/ that use $(SHLIB_LDFLAGS) (even if those are never built), and
    there's libxenlight_test.so, which doesn't need a version script.
    
    Also, libxenlight_test.so might fail to build if the version script
    doesn't exist yet.
    
    For these reasons, avoid changing the generic $(SHLIB_LDFLAGS) flags,
    and add the flag directly on the command line.
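    
    The fix can be sketched with a minimal, hypothetical Makefile (names
    such as libfoo and foo.map are illustrative, not the real build
    system):

```make
# Appending to a shared variable leaks the flag into every rule that
# expands that variable:
#   SHLIB_LDFLAGS += -Wl,--version-script=foo.map   # affects all users
SHLIB_LDFLAGS := -shared

# Better: pass the flag only on the one link command that needs it.
libfoo.so.1.0: foo.o foo.map
	$(CC) $(SHLIB_LDFLAGS) -Wl,--version-script=foo.map -o $@ foo.o

# Other users of $(SHLIB_LDFLAGS) are unaffected and need no map file.
libfoo_test.so: foo_test.o
	$(CC) $(SHLIB_LDFLAGS) -o $@ foo_test.o
```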
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/libs/libs.mk | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/tools/libs/libs.mk b/tools/libs/libs.mk
index e47fb30ed4..3eb91fc8f3 100644
--- a/tools/libs/libs.mk
+++ b/tools/libs/libs.mk
@@ -12,8 +12,6 @@ MAJOR := $(shell $(XEN_ROOT)/version.sh $(XEN_ROOT)/xen/Makefile)
 endif
 MINOR ?= 0
 
-SHLIB_LDFLAGS += -Wl,--version-script=libxen$(LIBNAME).map
-
 CFLAGS   += -Wmissing-prototypes
 CFLAGS   += $(CFLAGS_xeninclude)
 CFLAGS   += $(foreach lib, $(USELIBS_$(LIBNAME)), $(CFLAGS_libxen$(lib)))
@@ -85,7 +83,7 @@ lib$(LIB_FILE_NAME).so.$(MAJOR): lib$(LIB_FILE_NAME).so.$(MAJOR).$(MINOR)
 	$(SYMLINK_SHLIB) $< $@
 
 lib$(LIB_FILE_NAME).so.$(MAJOR).$(MINOR): $(PIC_OBJS) libxen$(LIBNAME).map
-	$(CC) $(LDFLAGS) $(PTHREAD_LDFLAGS) -Wl,$(SONAME_LDFLAG) -Wl,lib$(LIB_FILE_NAME).so.$(MAJOR) $(SHLIB_LDFLAGS) -o $@ $(PIC_OBJS) $(LDLIBS) $(APPEND_LDFLAGS)
+	$(CC) $(LDFLAGS) $(PTHREAD_LDFLAGS) -Wl,$(SONAME_LDFLAG) -Wl,lib$(LIB_FILE_NAME).so.$(MAJOR) -Wl,--version-script=libxen$(LIBNAME).map $(SHLIB_LDFLAGS) -o $@ $(PIC_OBJS) $(LDLIBS) $(APPEND_LDFLAGS)
 
 # If abi-dumper is available, write out the ABI analysis
 ifneq ($(ABI_DUMPER),)
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Fri Oct 14 20:11:45 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Oct 2022 20:11:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.423106.669566 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ojR21-00069u-87; Fri, 14 Oct 2022 20:11:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 423106.669566; Fri, 14 Oct 2022 20:11:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ojR21-00069n-5b; Fri, 14 Oct 2022 20:11:45 +0000
Received: by outflank-mailman (input) for mailman id 423106;
 Fri, 14 Oct 2022 20:11:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojR20-00069b-M3
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 20:11:44 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojR20-0000i7-LM
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 20:11:44 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojR20-0004E9-KW
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 20:11:44 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=8manTUV9IYO9+vXbxsIx/3NKuQcOIp67/C/e+OYzsWI=; b=MnbUw0aMtxmiI3nz1V+RpR8Xjw
	a/l/G3qHxfU/EJSzSJYp+pC52apg8S7ki4deAdlqHAO9/lfSvnHQIVSwkici6V8hqJl1BJMYzX38H
	Cibh0T8sVOqcac3Gl2HZ+GYKV1iQ+fNll2lCciKgtntFOvNaHpOvFzV/+P1HnYSc05ng=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] tools/include: Rework Makefile
Message-Id: <E1ojR20-0004E9-KW@xenbits.xenproject.org>
Date: Fri, 14 Oct 2022 20:11:44 +0000

commit 6aabee32b572216ecb7292d26f99e1a3b49b6524
Author:     Anthony PERARD <anthony.perard@citrix.com>
AuthorDate: Thu Oct 13 14:05:07 2022 +0100
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Fri Oct 14 20:56:57 2022 +0100

    tools/include: Rework Makefile
    
    Rework the "xen-xsm" rules so that there is no need to change directory
    to run mkflask.sh: store the mkflask.sh path in a variable, use a full
    path for FLASK_H_DEPEND, and make the output directory relative.
    
    Rename the "all-y" target to the more descriptive "xen/lib/x86/all".
    
    Remove the "dist" target, which was the only one still existing in tools/.
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/include/Makefile | 28 +++++++++++++++-------------
 1 file changed, 15 insertions(+), 13 deletions(-)

diff --git a/tools/include/Makefile b/tools/include/Makefile
index b488f7ca9f..81c3d09039 100644
--- a/tools/include/Makefile
+++ b/tools/include/Makefile
@@ -7,17 +7,20 @@ include $(XEN_ROOT)/tools/Rules.mk
 # taken into account, i.e. there should be no rules added here for generating
 # any tools/include/*.h files.
 
-# Relative to $(XEN_ROOT)/xen/xsm/flask
-FLASK_H_DEPEND := policy/initial_sids
+.PHONY: all
+all: xen-foreign xen-dir xen-xsm/.dir
+ifeq ($(CONFIG_X86),y)
+all: xen/lib/x86/all
+endif
 
-.PHONY: all all-y build xen-dir
-all build: all-y xen-foreign xen-dir xen-xsm/.dir
-all-y:
+.PHONY: build
+build: all
 
 .PHONY: xen-foreign
 xen-foreign:
 	$(MAKE) -C xen-foreign
 
+.PHONY: xen-dir
 xen-dir:
 	mkdir -p xen/libelf acpi
 	find xen/ acpi/ -type l -exec rm '{}' +
@@ -36,16 +39,18 @@ ifeq ($(CONFIG_X86),y)
 	ln -s $(XEN_ROOT)/xen/include/xen/lib/x86/Makefile xen/lib/x86/
 endif
 
-all-$(CONFIG_X86): xen-dir
+.PHONY: xen/lib/x86/all
+xen/lib/x86/all: xen-dir
 	$(MAKE) -C xen/lib/x86 all XEN_ROOT=$(XEN_ROOT) PYTHON=$(PYTHON)
 
+MKFLASK := $(XEN_ROOT)/xen/xsm/flask/policy/mkflask.sh
+FLASK_H_DEPEND := $(XEN_ROOT)/xen/xsm/flask/policy/initial_sids
+
 # Not xen/xsm as that clashes with link to
 # $(XEN_ROOT)/xen/include/public/xsm above.
-xen-xsm/.dir: $(XEN_ROOT)/xen/xsm/flask/policy/mkflask.sh \
-	      $(patsubst %,$(XEN_ROOT)/xen/xsm/flask/%,$(FLASK_H_DEPEND))
+xen-xsm/.dir: $(MKFLASK) $(FLASK_H_DEPEND)
 	mkdir -p xen-xsm/flask
-	cd $(XEN_ROOT)/xen/xsm/flask/ && \
-		$(SHELL) policy/mkflask.sh $(AWK) $(CURDIR)/xen-xsm/flask $(FLASK_H_DEPEND)
+	$(SHELL) $(MKFLASK) $(AWK) xen-xsm/flask $(FLASK_H_DEPEND)
 	touch $@
 
 .PHONY: install
@@ -84,8 +89,5 @@ clean:
 	$(MAKE) -C xen-foreign clean
 	rm -f _*.h
 
-.PHONY: dist
-dist: install
-
 .PHONY: distclean
 distclean: clean
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Fri Oct 14 20:11:55 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Oct 2022 20:11:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.423107.669570 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ojR2B-0006Co-9l; Fri, 14 Oct 2022 20:11:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 423107.669570; Fri, 14 Oct 2022 20:11:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ojR2B-0006Cg-73; Fri, 14 Oct 2022 20:11:55 +0000
Received: by outflank-mailman (input) for mailman id 423107;
 Fri, 14 Oct 2022 20:11:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojR2A-0006CW-P2
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 20:11:54 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojR2A-0000iH-OE
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 20:11:54 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojR2A-0004Eg-NO
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 20:11:54 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=CcY5OR/8eJ5Qdl1Di7aSylqCso6ifEA5kUjycKXf68Q=; b=Adie5oOeTjA5AqJFPcan0riIWD
	G5dX49qZaIOSkRrPzUy/fL8hvzHAdgZJ4JhBiJ7m0N/unoQeBqG+/2GXnXLhFblVZtqMRql2qyiS4
	gP4SqEX7tLIn6qGJkuSZ2PJtXW1hu9iZ9u/kj7+S3pN35z6at+6cO3LA9M9btpd+bW4c=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] libs/light: Rework acpi table build targets
Message-Id: <E1ojR2A-0004Eg-NO@xenbits.xenproject.org>
Date: Fri, 14 Oct 2022 20:11:54 +0000

commit 9eb46d3f9808417ee84a38778d808d34058fb546
Author:     Anthony PERARD <anthony.perard@citrix.com>
AuthorDate: Thu Oct 13 14:05:08 2022 +0100
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Fri Oct 14 20:56:57 2022 +0100

    libs/light: Rework acpi table build targets
    
    Currently, a rebuild of libxl will always rebuild "build.o". This is because
    the target depends on "acpi", which never exists as a file. Instead, give
    "build.o" as prerequisites the files that are actually generated by "acpi",
    that is $(DSDT_FILES-y).
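    
    As a hedged sketch (generic make, not the actual libxl Makefile), the
    underlying problem is that a dependency on a .PHONY target is always
    considered out of date:

```make
.PHONY: acpi
acpi:
	$(MAKE) -C libacpi

# Bad: "acpi" never exists as a file, so build.o is rebuilt on every run.
#   build.o: acpi

# Better: depend on real files that the "acpi" step generates.
dsdt_pvh.c: acpi
build.o: dsdt_pvh.c
```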
    
    While "dsdt_*.c" isn't really a dependency of "build.o", a side
    effect of building dsdt_*.c is to also generate the "ssdt_*.h"
    headers that "build.o" does need. Listing all the headers needed
    by "build.o" would duplicate the information available in
    "libacpi/Makefile", so avoid that for now.
    
    Also avoid duplicating the "acpi" target for Arm; use a single one for
    both architectures. And move the "acpi" target to be with the other
    targets rather than in the middle of the source listing. For the same
    reason, move the prerequisite listings for both $(DSDT_FILES-y) and
    "build.o".
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/libs/light/Makefile | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/tools/libs/light/Makefile b/tools/libs/light/Makefile
index 13545654c2..d84e5f3cd9 100644
--- a/tools/libs/light/Makefile
+++ b/tools/libs/light/Makefile
@@ -32,14 +32,10 @@ ACPI_PATH  = $(XEN_ROOT)/tools/libacpi
 DSDT_FILES-$(CONFIG_X86) = dsdt_pvh.c
 ACPI_OBJS  = $(patsubst %.c,%.o,$(DSDT_FILES-y)) build.o static_tables.o
 ACPI_PIC_OBJS = $(patsubst %.o,%.opic,$(ACPI_OBJS))
-$(DSDT_FILES-y) build.o build.opic: acpi
+
 vpath build.c $(ACPI_PATH)/
 vpath static_tables.c $(ACPI_PATH)/
 
-.PHONY: acpi
-acpi:
-	$(MAKE) -C $(ACPI_PATH) ACPI_BUILD_DIR=$(CURDIR) DSDT_FILES="$(DSDT_FILES-y)"
-
 OBJS-$(CONFIG_X86) += $(ACPI_OBJS)
 
 CFLAGS += -Wno-format-zero-length -Wmissing-declarations \
@@ -58,8 +54,6 @@ ifeq ($(CONFIG_ARM_64),y)
 DSDT_FILES-y = dsdt_anycpu_arm.c
 OBJS-y += libxl_arm_acpi.o
 OBJS-y += $(DSDT_FILES-y:.c=.o)
-dsdt_anycpu_arm.c:
-	$(MAKE) -C $(ACPI_PATH) ACPI_BUILD_DIR=$(CURDIR) DSDT_FILES="$(DSDT_FILES-y)"
 else
 OBJS-$(CONFIG_ARM) += libxl_arm_no_acpi.o
 endif
@@ -191,6 +185,12 @@ all: $(CLIENTS) $(TEST_PROGS) $(AUTOSRCS) $(AUTOINCS)
 
 $(OBJS-y) $(PIC_OBJS) $(SAVE_HELPER_OBJS) $(LIBXL_TEST_OBJS) $(TEST_PROG_OBJS): $(AUTOINCS) libxl.api-ok
 
+$(DSDT_FILES-y): acpi
+
+# Depend on the source files generated by the "acpi" target even though
+# "build.o" doesn't need them.  It does need the generated headers.
+build.o build.opic: $(DSDT_FILES-y)
+
 libxl.api-ok: check-libxl-api-rules _libxl.api-for-check
 	$(PERL) $^
 	touch $@
@@ -227,6 +227,10 @@ _libxl_type%.h _libxl_type%_json.h _libxl_type%_private.h _libxl_type%.c: libxl_
 $(XEN_INCLUDE)/_%.h: _%.h
 	$(call move-if-changed,_$*.h,$(XEN_INCLUDE)/_$*.h)
 
+.PHONY: acpi
+acpi:
+	$(MAKE) -C $(ACPI_PATH) ACPI_BUILD_DIR=$(CURDIR) DSDT_FILES="$(DSDT_FILES-y)"
+
 libxenlight_test.so: $(PIC_OBJS) $(LIBXL_TEST_OBJS)
 	$(CC) $(LDFLAGS) -Wl,$(SONAME_LDFLAG) -Wl,libxenlight.so.$(MAJOR) $(SHLIB_LDFLAGS) -o $@ $^ $(LDLIBS) $(APPEND_LDFLAGS)
 
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Fri Oct 14 20:12:06 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Oct 2022 20:12:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.423108.669573 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ojR2M-0006Hc-BV; Fri, 14 Oct 2022 20:12:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 423108.669573; Fri, 14 Oct 2022 20:12:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ojR2M-0006HV-8e; Fri, 14 Oct 2022 20:12:06 +0000
Received: by outflank-mailman (input) for mailman id 423108;
 Fri, 14 Oct 2022 20:12:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojR2K-0006H6-Ru
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 20:12:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojR2K-0000iw-RA
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 20:12:04 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojR2K-0004Fs-QJ
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 20:12:04 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=u50Qwp97MaD37Q6xCihfrSCsWnsQsuME0zSFhCcpFCg=; b=4ViY/LaE/5++sLTV85RoTY2bLv
	wupJmwrF7se3b9he5rZCzPX+nGZByhJ+ez/sNVoQr/0FsT7eb2r2vjlzACWPBGdGChA8ey370eqoR
	tJz4oZIpGRU4THZY3aco45tx0MrdzXKFWuHX2JL+Ql0zOv+gwA+tEmfHrcMqeTMEels8=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] libs/light: Rework generation of include/_libxl_*.h
Message-Id: <E1ojR2K-0004Fs-QJ@xenbits.xenproject.org>
Date: Fri, 14 Oct 2022 20:12:04 +0000

commit 68d19cfb90a5bb6257e03be3f21c912bac7ec49b
Author:     Anthony PERARD <anthony.perard@citrix.com>
AuthorDate: Thu Oct 13 14:05:09 2022 +0100
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Fri Oct 14 20:56:57 2022 +0100

    libs/light: Rework generation of include/_libxl_*.h
    
    Instead of moving the public "_libxl_*.h" headers, make a copy at
    the destination so that make doesn't try to remake the
    "_libxl_*.h" targets in libs/light/ again.
    
    A new .PRECIOUS target is added to tell make not to delete the
    intermediate files generated by "gentypes.py".
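    
    A minimal sketch of the .PRECIOUS behaviour (the generator name here
    is hypothetical):

```make
# Files produced by a chain of pattern rules count as "intermediate"
# and are deleted by make once they are no longer needed.
_libxl_type%.h _libxl_type%.c: libxl_type%.idl
	./gen.py $<    # hypothetical generator producing both files

# Mark them precious so make keeps them on disk between runs.
.PRECIOUS: _libxl_type%.h _libxl_type%.c
```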
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/libs/light/Makefile | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/tools/libs/light/Makefile b/tools/libs/light/Makefile
index d84e5f3cd9..d681269229 100644
--- a/tools/libs/light/Makefile
+++ b/tools/libs/light/Makefile
@@ -215,6 +215,8 @@ libxl_internal_json.h: _libxl_types_internal_json.h
 $(OBJS-y) $(PIC_OBJS) $(LIBXL_TEST_OBJS) $(TEST_PROG_OBJS) $(SAVE_HELPER_OBJS): $(XEN_INCLUDE)/libxl.h
 $(OBJS-y) $(PIC_OBJS) $(LIBXL_TEST_OBJS): libxl_internal.h
 
+# This exploits the 'multi-target pattern rule' trick.
+# gentypes.py should be executed only once to make all the targets.
 _libxl_type%.h _libxl_type%_json.h _libxl_type%_private.h _libxl_type%.c: libxl_type%.idl gentypes.py idl.py
 	$(eval stem = $(notdir $*))
 	$(PYTHON) gentypes.py libxl_type$(stem).idl __libxl_type$(stem).h __libxl_type$(stem)_private.h \
@@ -224,8 +226,10 @@ _libxl_type%.h _libxl_type%_json.h _libxl_type%_private.h _libxl_type%.c: libxl_
 	$(call move-if-changed,__libxl_type$(stem)_json.h,_libxl_type$(stem)_json.h)
 	$(call move-if-changed,__libxl_type$(stem).c,_libxl_type$(stem).c)
 
-$(XEN_INCLUDE)/_%.h: _%.h
-	$(call move-if-changed,_$*.h,$(XEN_INCLUDE)/_$*.h)
+.PRECIOUS: _libxl_type%.h _libxl_type%.c
+
+$(XEN_INCLUDE)/_libxl_%.h: _libxl_%.h
+	cp -f $< $@
 
 .PHONY: acpi
 acpi:
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Fri Oct 14 20:12:16 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Oct 2022 20:12:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.423109.669579 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ojR2W-0006Lz-F4; Fri, 14 Oct 2022 20:12:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 423109.669579; Fri, 14 Oct 2022 20:12:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ojR2W-0006Lq-Bn; Fri, 14 Oct 2022 20:12:16 +0000
Received: by outflank-mailman (input) for mailman id 423109;
 Fri, 14 Oct 2022 20:12:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojR2U-0006LX-Uw
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 20:12:14 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojR2U-0000jI-UC
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 20:12:14 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojR2U-0004H2-TG
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 20:12:14 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=PPcrxNw5wSlJiEI8/nBQM7Gkmsos/7Cr5gYK9+HpvdY=; b=Ji5GrNgpX6I9Wr6gaZolBpE8/6
	/F7eAtPj8yt2+GKWP1/DZ0rqS1IA7aBPvvtFCFMX/EQUM9fNvH8ay+Es2FVLWIn8NaiVjnMp1aduZ
	UysbY+zSB25C6DC7lom7iDb9C1fUFAXYX6BMjnYFzD5buZ9CmMX4Lc64Xt7dvoqHRZxA=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] tools/golang/xenlight: Rework gengotypes.py and generation of *.gen.go
Message-Id: <E1ojR2U-0004H2-TG@xenbits.xenproject.org>
Date: Fri, 14 Oct 2022 20:12:14 +0000

commit 3f9d53af25dc7f0a9b05e3497822f1eeb47589d9
Author:     Anthony PERARD <anthony.perard@citrix.com>
AuthorDate: Thu Oct 13 14:05:12 2022 +0100
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Fri Oct 14 20:56:57 2022 +0100

    tools/golang/xenlight: Rework gengotypes.py and generation of *.gen.go
    
    gengotypes.py creates both "types.gen.go" and "helpers.gen.go", but
    make can start gengotypes.py twice. Rework the rules so that
    gengotypes.py is executed only once.
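    
    The "run once" property comes from how GNU make treats pattern rules
    with multiple targets; a generic sketch (generator name hypothetical):

```make
# With an explicit rule, each target gets its own recipe invocation,
# so the generator would be started twice:
#   types.gen.go helpers.gen.go: gen.py
#   	./gen.py
#
# With a pattern rule, make assumes one invocation of the recipe
# produces all the targets, so it runs only once ('%' matches "gen"):
types.%.go helpers.%.go: gen.py
	./gen.py types.$*.go helpers.$*.go
```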
    
    Also, add the ability to provide paths telling gengotypes.py where to
    put the files. This doesn't matter yet, but it will when, for example,
    the script is run from tools/ to generate the targets.
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/golang/xenlight/Makefile      |  6 ++++--
 tools/golang/xenlight/gengotypes.py | 12 +++++++++++-
 2 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/tools/golang/xenlight/Makefile b/tools/golang/xenlight/Makefile
index 00e6d17f2b..c5bb6b94a8 100644
--- a/tools/golang/xenlight/Makefile
+++ b/tools/golang/xenlight/Makefile
@@ -15,8 +15,10 @@ all: build
 
 GOXL_GEN_FILES = types.gen.go helpers.gen.go
 
-%.gen.go: gengotypes.py $(LIBXL_SRC_DIR)/libxl_types.idl $(LIBXL_SRC_DIR)/idl.py
-	LIBXL_SRC_DIR=$(LIBXL_SRC_DIR) $(PYTHON) gengotypes.py $(LIBXL_SRC_DIR)/libxl_types.idl
+# This exploits the 'multi-target pattern rule' trick.
+# gengotypes.py should be executed only once to make all the targets.
+$(subst .gen.,.%.,$(GOXL_GEN_FILES)): gengotypes.py $(LIBXL_SRC_DIR)/libxl_types.idl $(LIBXL_SRC_DIR)/idl.py
+	LIBXL_SRC_DIR=$(LIBXL_SRC_DIR) $(PYTHON) gengotypes.py $(LIBXL_SRC_DIR)/libxl_types.idl $(@D)/types.gen.go $(@D)/helpers.gen.go
 
 # Go will do its own dependency checking, and not actually go through
 # with the build if none of the input files have changed.
diff --git a/tools/golang/xenlight/gengotypes.py b/tools/golang/xenlight/gengotypes.py
index ac1cf060dd..9fec60602d 100644
--- a/tools/golang/xenlight/gengotypes.py
+++ b/tools/golang/xenlight/gengotypes.py
@@ -1,5 +1,7 @@
 #!/usr/bin/python
 
+from __future__ import print_function
+
 import os
 import sys
 
@@ -723,7 +725,13 @@ def xenlight_golang_fmt_name(name, exported = True):
     return words[0] + ''.join(x.title() for x in words[1:])
 
 if __name__ == '__main__':
+    if len(sys.argv) != 4:
+        print("Usage: gengotypes.py <idl> <types.gen.go> <helpers.gen.go>", file=sys.stderr)
+        sys.exit(1)
+
     idlname = sys.argv[1]
+    path_types = sys.argv[2]
+    path_helpers = sys.argv[3]
 
     (builtins, types) = idl.parse(idlname)
 
@@ -735,9 +743,11 @@ if __name__ == '__main__':
 // source: {}
 
 """.format(os.path.basename(sys.argv[0]),
-           ' '.join([os.path.basename(a) for a in sys.argv[1:]]))
+           os.path.basename(sys.argv[1]))
 
     xenlight_golang_generate_types(types=types,
+                                   path=path_types,
                                    comment=header_comment)
     xenlight_golang_generate_helpers(types=types,
+                                     path=path_helpers,
                                      comment=header_comment)
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Fri Oct 14 20:12:26 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Oct 2022 20:12:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.423111.669581 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ojR2g-0006Oz-Fp; Fri, 14 Oct 2022 20:12:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 423111.669581; Fri, 14 Oct 2022 20:12:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ojR2g-0006Os-DI; Fri, 14 Oct 2022 20:12:26 +0000
Received: by outflank-mailman (input) for mailman id 423111;
 Fri, 14 Oct 2022 20:12:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojR2f-0006Od-1X
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 20:12:25 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojR2f-0000jV-0n
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 20:12:25 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ojR2f-0004Hq-04
 for xen-changelog@lists.xenproject.org; Fri, 14 Oct 2022 20:12:25 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=0dyEaCsEiyIDWgKyIHFty6CwidDVxgbxjWhfSUu6UjE=; b=T2pFBVl8YnQqiUhj8hfYBNBAb8
	uLGQD+B2XTI3WCFkrYkVHaAY5x9+bhjEMlRVYEZxUITzXzBQOQ+Oos1/rqL6O7+2syY29qcmKbP/A
	Cmu16HME22jjOGMTyZfjDeY0Kpos/YXQDi9iZJI9GK86CtrGz0xriQEQfdGldmjxrK6o=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] tools: Rework linking options for ocaml binding libraries
Message-Id: <E1ojR2f-0004Hq-04@xenbits.xenproject.org>
Date: Fri, 14 Oct 2022 20:12:25 +0000

commit 5310a3aa5026fb27d6834306d920d6207a1e0898
Author:     Anthony PERARD <anthony.perard@citrix.com>
AuthorDate: Thu Oct 13 14:05:13 2022 +0100
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Fri Oct 14 20:56:57 2022 +0100

    tools: Rework linking options for ocaml binding libraries
    
    Using a full path to the C libraries when building one of the ocaml
    bindings for those libraries makes the binding unusable by external
    projects. The full path is somehow embedded and reused by the external
    project when linking against the binding.
    
    Instead, use the proper way to link a library: '-l'.
    For in-tree builds, we also need to provide the search directory via
    '-L'.
    
    (The -L search paths are still embedded, but at least that doesn't
    prevent the ocaml bindings from being used.)
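    
    The difference can be sketched as follows (paths and names are
    illustrative, not the actual build tree):

```make
# Bad: a hard-coded path to the shared object gets embedded in anything
# that later links against the binding.
#   LIBS_xenctrl = /build/tree/tools/libs/ctrl/libxenctrl.so

# Better: link by name; only the -L search directory is build-specific.
LIBS_xenctrl = -L$(XEN_ROOT)/tools/libs/ctrl -lxenctrl
```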
    
    Related-to: xen-project/xen#96
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/Rules.mk                       | 8 ++++++++
 tools/ocaml/libs/eventchn/Makefile   | 2 +-
 tools/ocaml/libs/xc/Makefile         | 2 +-
 tools/ocaml/libs/xentoollog/Makefile | 2 +-
 tools/ocaml/libs/xl/Makefile         | 2 +-
 5 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/tools/Rules.mk b/tools/Rules.mk
index a165dc4bda..34d495fff7 100644
--- a/tools/Rules.mk
+++ b/tools/Rules.mk
@@ -113,6 +113,14 @@ define xenlibs-ldflags
     $(foreach lib,$(1),-L$(XEN_ROOT)/tools/libs/$(lib))
 endef
 
+# Flags for linking against all Xen libraries listed in $(1) but by making use
+# of -L and -l instead of providing a path to the shared library.
+define xenlibs-ldflags-ldlibs
+    $(call xenlibs-ldflags,$(1)) \
+    $(foreach lib,$(1), -l$(FILENAME_$(lib))) \
+    $(foreach lib,$(1),$(xenlibs-ldlibs-$(lib)))
+endef
+
 define LIB_defs
  FILENAME_$(1) ?= xen$(1)
  XEN_libxen$(1) = $$(XEN_ROOT)/tools/libs/$(1)
diff --git a/tools/ocaml/libs/eventchn/Makefile b/tools/ocaml/libs/eventchn/Makefile
index 7362a28d9e..dc560ba49b 100644
--- a/tools/ocaml/libs/eventchn/Makefile
+++ b/tools/ocaml/libs/eventchn/Makefile
@@ -8,7 +8,7 @@ OBJS = xeneventchn
 INTF = $(foreach obj, $(OBJS),$(obj).cmi)
 LIBS = xeneventchn.cma xeneventchn.cmxa
 
-LIBS_xeneventchn = $(LDLIBS_libxenevtchn)
+LIBS_xeneventchn = $(call xenlibs-ldflags-ldlibs,evtchn)
 
 all: $(INTF) $(LIBS) $(PROGRAMS)
 
diff --git a/tools/ocaml/libs/xc/Makefile b/tools/ocaml/libs/xc/Makefile
index 67acc46bee..3b76e9ad7b 100644
--- a/tools/ocaml/libs/xc/Makefile
+++ b/tools/ocaml/libs/xc/Makefile
@@ -10,7 +10,7 @@ OBJS = xenctrl
 INTF = xenctrl.cmi
 LIBS = xenctrl.cma xenctrl.cmxa
 
-LIBS_xenctrl = $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest)
+LIBS_xenctrl = $(call xenlibs-ldflags-ldlibs,ctrl guest)
 
 xenctrl_OBJS = $(OBJS)
 xenctrl_C_OBJS = xenctrl_stubs
diff --git a/tools/ocaml/libs/xentoollog/Makefile b/tools/ocaml/libs/xentoollog/Makefile
index 9ede2fd124..1645b40faf 100644
--- a/tools/ocaml/libs/xentoollog/Makefile
+++ b/tools/ocaml/libs/xentoollog/Makefile
@@ -13,7 +13,7 @@ OBJS = xentoollog
 INTF = xentoollog.cmi
 LIBS = xentoollog.cma xentoollog.cmxa
 
-LIBS_xentoollog = $(LDLIBS_libxentoollog)
+LIBS_xentoollog = $(call xenlibs-ldflags-ldlibs,toollog)
 
 xentoollog_OBJS = $(OBJS)
 xentoollog_C_OBJS = xentoollog_stubs
diff --git a/tools/ocaml/libs/xl/Makefile b/tools/ocaml/libs/xl/Makefile
index 7c1c4edced..22d6c93aae 100644
--- a/tools/ocaml/libs/xl/Makefile
+++ b/tools/ocaml/libs/xl/Makefile
@@ -15,7 +15,7 @@ LIBS = xenlight.cma xenlight.cmxa
 
 OCAMLINCLUDE += -I ../xentoollog
 
-LIBS_xenlight = $(LDLIBS_libxenlight)
+LIBS_xenlight = $(call xenlibs-ldflags-ldlibs,light)
 
 xenlight_OBJS = $(OBJS)
 xenlight_C_OBJS = xenlight_stubs
--
generated by git-patchbot for /home/xen/git/xen.git#staging
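The new `xenlibs-ldflags-ldlibs` macro above can be illustrated with a standalone, simplified Make fragment (paths and the library name are illustrative, and the trailing `xenlibs-ldlibs-$(lib)` expansion of the real macro is omitted here):

```make
# Simplified sketch of the Rules.mk macros; XEN_ROOT and the evtchn
# library name are stand-ins for the real build tree.
XEN_ROOT ?= /path/to/xen
FILENAME_evtchn ?= xenevtchn

define xenlibs-ldflags
    $(foreach lib,$(1),-L$(XEN_ROOT)/tools/libs/$(lib))
endef

define xenlibs-ldflags-ldlibs
    $(call xenlibs-ldflags,$(1)) \
    $(foreach lib,$(1), -l$(FILENAME_$(lib)))
endef

# $(call xenlibs-ldflags-ldlibs,evtchn) then expands to roughly:
#   -L/path/to/xen/tools/libs/evtchn -lxenevtchn
```

This lets callers such as the OCaml bindings link with `-L`/`-l` pairs instead of spelling out a path to each shared library.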


From xen-changelog-bounces@lists.xenproject.org Mon Oct 17 14:11:11 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Oct 2022 14:11:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.424436.671845 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1okQpe-00067k-GL; Mon, 17 Oct 2022 14:11:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 424436.671845; Mon, 17 Oct 2022 14:11:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1okQpe-00067c-Df; Mon, 17 Oct 2022 14:11:06 +0000
Received: by outflank-mailman (input) for mailman id 424436;
 Mon, 17 Oct 2022 14:11:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1okQpc-00067W-MY
 for xen-changelog@lists.xenproject.org; Mon, 17 Oct 2022 14:11:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1okQpc-0000Bv-Ku
 for xen-changelog@lists.xenproject.org; Mon, 17 Oct 2022 14:11:04 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1okQpc-00010r-J2
 for xen-changelog@lists.xenproject.org; Mon, 17 Oct 2022 14:11:04 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=FxVaaCERE5Mi6fNRErv/eJ0qH0eeRMOswWQiA7soS4o=; b=YlbgZDCpTHwauF18M9H4kPtgET
	9rTVm7K4igtpCG+SXL6NM7H9cWfmevcGl1c7k0AlOBVwUEaPgiTWtHZwQngI2k9Xg0IxbE/1IDVnL
	AhS1IwqG7qynkSNxT1OVIl15adXTZJP3qyEADTLTXRtuU3TqjYGCAgEGyr/7tALXAWPM=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] tools: Workaround wrong use of tools/Rules.mk by qemu-trad
Message-Id: <E1okQpc-00010r-J2@xenbits.xenproject.org>
Date: Mon, 17 Oct 2022 14:11:04 +0000

commit cc4747be8ba157a3b310921e9ee07fb8545aa206
Author:     Anthony PERARD <anthony.perard@citrix.com>
AuthorDate: Mon Oct 17 11:34:03 2022 +0100
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Mon Oct 17 14:57:34 2022 +0100

    tools: Workaround wrong use of tools/Rules.mk by qemu-trad
    
    The qemu-trad build system, when built from xen.git, makes use of
    Rules.mk (set up via qemu-trad.git/xen-setup). This means that changes
    to Rules.mk have an impact on our ability to build qemu-trad.
    
    Recent commit e4f5949c4466 ("tools: Add -Werror by default to all
    tools/") added "-Werror" to the CFLAGS, and qemu-trad started to use
    it. But this fails, as lots of warnings are now turned into errors.
    
    We should teach qemu-trad and xen.git not to use Rules.mk when
    building qemu-trad, but for now, avoid adding -Werror to CFLAGS when
    building qemu-trad.
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/Makefile | 1 +
 tools/Rules.mk | 3 +++
 2 files changed, 4 insertions(+)

diff --git a/tools/Makefile b/tools/Makefile
index 0c1d8b64a4..9e28027835 100644
--- a/tools/Makefile
+++ b/tools/Makefile
@@ -159,6 +159,7 @@ qemu-traditional-recurse = \
 	set -e; \
 		$(buildmakevars2shellvars); \
 		export CONFIG_BLKTAP1=n; \
+		export BUILDING_QEMU_TRAD=y; \
 		cd qemu-xen-traditional-dir; \
 		$(1)
 
diff --git a/tools/Rules.mk b/tools/Rules.mk
index 34d495fff7..6e135387bd 100644
--- a/tools/Rules.mk
+++ b/tools/Rules.mk
@@ -141,9 +141,12 @@ endif
 
 CFLAGS_libxenlight += $(CFLAGS_libxenctrl)
 
+# Don't add -Werror if we are used by qemu-trad build system.
+ifndef BUILDING_QEMU_TRAD
 ifeq ($(CONFIG_WERROR),y)
 CFLAGS += -Werror
 endif
+endif
 
 ifeq ($(debug),y)
 # Use -Og if available, -O0 otherwise
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Thu Oct 20 08:44:11 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Oct 2022 08:44:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.426297.674630 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olR9q-0002Mk-Au; Thu, 20 Oct 2022 08:44:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 426297.674630; Thu, 20 Oct 2022 08:44:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olR9q-0002Md-8U; Thu, 20 Oct 2022 08:44:06 +0000
Received: by outflank-mailman (input) for mailman id 426297;
 Thu, 20 Oct 2022 08:44:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olR9o-0002MX-Ll
 for xen-changelog@lists.xenproject.org; Thu, 20 Oct 2022 08:44:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olR9o-0002tu-Hj
 for xen-changelog@lists.xenproject.org; Thu, 20 Oct 2022 08:44:04 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olR9o-0008J0-Gf
 for xen-changelog@lists.xenproject.org; Thu, 20 Oct 2022 08:44:04 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=VJEV4ff6Ck6eynOOwZhp4OtkgHVkX8t4xq7xDqDLNEo=; b=2VsFK1S7VUlaalmkkQe3F/ZEgK
	6zLPwXirvAGtBbeTZJBCndcGD1j7gKYuzYoqhZZtTP7gvHyH3JwGR4ioAXcGWIkMYzswEjPMBmiZb
	uv+Kbw73O7k3qHIeQ2SMaLl3fza0PWAjseg2yCHR98s8BiXe/NxbIXgSmoGwCMzaX+to=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] arm/p2m: Rework p2m_init()
Message-Id: <E1olR9o-0008J0-Gf@xenbits.xenproject.org>
Date: Thu, 20 Oct 2022 08:44:04 +0000

commit 3783e583319fa1ce75e414d851f0fde191a14753
Author:     Andrew Cooper <andrew.cooper3@citrix.com>
AuthorDate: Tue Oct 18 14:23:45 2022 +0000
Commit:     Julien Grall <jgrall@amazon.com>
CommitDate: Thu Oct 20 09:39:56 2022 +0100

    arm/p2m: Rework p2m_init()
    
    p2m_init() is mostly trivial initialisation, but has two fallible
    operations which sit on either side of setting the backpointer that
    triggers teardown to take action.
    
    p2m_free_vmid() is idempotent with a failed p2m_alloc_vmid(), so rearrange
    p2m_init() to perform all trivial setup, then set the backpointer, then
    perform all fallible setup.
    
    This will simplify a future bugfix which needs to add a third fallible
    operation.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/arch/arm/p2m.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index f17500ddf3..6826f63150 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1754,7 +1754,7 @@ void p2m_final_teardown(struct domain *d)
 int p2m_init(struct domain *d)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    int rc = 0;
+    int rc;
     unsigned int cpu;
 
     rwlock_init(&p2m->lock);
@@ -1763,11 +1763,6 @@ int p2m_init(struct domain *d)
     INIT_PAGE_LIST_HEAD(&d->arch.paging.p2m_freelist);
 
     p2m->vmid = INVALID_VMID;
-
-    rc = p2m_alloc_vmid(d);
-    if ( rc != 0 )
-        return rc;
-
     p2m->max_mapped_gfn = _gfn(0);
     p2m->lowest_mapped_gfn = _gfn(ULONG_MAX);
 
@@ -1783,8 +1778,6 @@ int p2m_init(struct domain *d)
     p2m->clean_pte = is_iommu_enabled(d) &&
         !iommu_has_feature(d, IOMMU_FEAT_COHERENT_WALK);
 
-    rc = p2m_alloc_table(d);
-
     /*
      * Make sure that the type chosen to is able to store the an vCPU ID
      * between 0 and the maximum of virtual CPUS supported as long as
@@ -1797,13 +1790,20 @@ int p2m_init(struct domain *d)
        p2m->last_vcpu_ran[cpu] = INVALID_VCPU_ID;
 
     /*
-     * Besides getting a domain when we only have the p2m in hand,
-     * the back pointer to domain is also used in p2m_teardown()
-     * as an end-of-initialization indicator.
+     * "Trivial" initialisation is now complete.  Set the backpointer so
+     * p2m_teardown() and friends know to do something.
      */
     p2m->domain = d;
 
-    return rc;
+    rc = p2m_alloc_vmid(d);
+    if ( rc )
+        return rc;
+
+    rc = p2m_alloc_table(d);
+    if ( rc )
+        return rc;
+
+    return 0;
 }
 
 /*
--
generated by git-patchbot for /home/xen/git/xen.git#staging
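The init-ordering pattern this commit adopts generalises beyond p2m_init(): do all infallible setup first, publish the "initialised" marker (here, the backpointer), then run the fallible steps, so teardown can safely run after any failure. A minimal sketch in plain C, with illustrative names (`struct obj`, `alloc_a`/`alloc_b`) that are not Xen APIs:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Sketch of the pattern: trivial init, then publish the marker,
 * then fallible allocations whose teardown is idempotent. */
struct obj {
    void *owner;          /* backpointer; NULL means "nothing to tear down" */
    bool res_a, res_b;
};

static int alloc_a(struct obj *o) { o->res_a = true; return 0; }
static int alloc_b(struct obj *o) { o->res_b = true; return 0; }

static int obj_init(struct obj *o, void *owner)
{
    int rc;

    /* Trivial, infallible initialisation. */
    o->res_a = o->res_b = false;

    /* Publish the marker before anything can fail. */
    o->owner = owner;

    rc = alloc_a(o);
    if ( rc )
        return rc;

    rc = alloc_b(o);
    if ( rc )
        return rc;

    return 0;
}

static void obj_teardown(struct obj *o)
{
    if ( !o->owner )      /* init never got far enough: nothing to do */
        return;
    /* Freeing is idempotent w.r.t. a failed allocation. */
    o->res_a = o->res_b = false;
    o->owner = NULL;
}
```

Because the marker is set before any fallible step, a caller can unconditionally invoke `obj_teardown()` on the failure path, just as p2m_teardown() keys off `p2m->domain`.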


From xen-changelog-bounces@lists.xenproject.org Thu Oct 20 08:44:16 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Oct 2022 08:44:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.426298.674634 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olRA0-0002Oa-CR; Thu, 20 Oct 2022 08:44:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 426298.674634; Thu, 20 Oct 2022 08:44:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olRA0-0002OT-9w; Thu, 20 Oct 2022 08:44:16 +0000
Received: by outflank-mailman (input) for mailman id 426298;
 Thu, 20 Oct 2022 08:44:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olR9y-0002OC-M1
 for xen-changelog@lists.xenproject.org; Thu, 20 Oct 2022 08:44:14 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olR9y-0002ty-LD
 for xen-changelog@lists.xenproject.org; Thu, 20 Oct 2022 08:44:14 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olR9y-0008JY-K9
 for xen-changelog@lists.xenproject.org; Thu, 20 Oct 2022 08:44:14 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=3P9yGfa3EpkPj7NvoPERYT4nQJArgClkH/CixRf7TAU=; b=rLe4T/3Qt9842FAQoQ5/FsG1nr
	+t9upuJUGTB1EHa49ikFrZpjditJBIdwZBoYz3zFbg7W8zjafB8k/9eZCk5mnssIxuhvcaPZ0oXpV
	ES9bNDpkdaJEnrdlDI62GsCFIPhSXXX1YzreryNpG9ZHM4KBAjzdGVAGOUUPD1IcJtso=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] xen/arm: p2m: Populate pages for GICv2 mapping in p2m_init()
Message-Id: <E1olR9y-0008JY-K9@xenbits.xenproject.org>
Date: Thu, 20 Oct 2022 08:44:14 +0000

commit c7cff1188802646eaa38e918e5738da0e84949be
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Tue Oct 18 14:23:46 2022 +0000
Commit:     Julien Grall <jgrall@amazon.com>
CommitDate: Thu Oct 20 09:40:10 2022 +0100

    xen/arm: p2m: Populate pages for GICv2 mapping in p2m_init()
    
    Hardware using GICv2 needs a P2M mapping of the 8KB GICv2 area to be
    created when the domain is created. The worst case for the page tables
    requires 6 P2M pages, as the two pages will be consecutive but not
    necessarily in the same L3 page table; to also keep a buffer, populate
    16 pages as the default value of the P2M pages pool in p2m_init() at
    the domain creation stage to satisfy the GICv2 requirement. For GICv3,
    the above-mentioned P2M mapping is not necessary, but since the 16
    pages allocated here would not be lost, populate these pages
    unconditionally.
    
    With the default 16 P2M pages populated, domain creation can fail
    with P2M pages already in use. To properly free the P2M in this case,
    first support optional preemption of p2m_teardown(), then call
    p2m_teardown() and p2m_set_allocation(d, 0, NULL) non-preemptively in
    p2m_final_teardown(). As non-preemptive p2m_teardown() should only
    return 0, use a BUG_ON() to confirm that.
    
    Since p2m_final_teardown() is called either after
    domain_relinquish_resources(), where relinquish_p2m_mapping() has
    already been called, or from the failure path of
    domain_create()/arch_domain_create(), where mappings that require
    p2m_put_l3_page() should never be created, relinquish_p2m_mapping()
    is not added to p2m_final_teardown(); in-code comments refer to this.
    
    Fixes: cbea5a1149ca ("xen/arm: Allocate and free P2M pages from the P2M pool")
    Suggested-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Release-acked-by: George Dunlap <george.dunlap@citrix.com>
---
 xen/arch/arm/domain.c          |  2 +-
 xen/arch/arm/include/asm/p2m.h | 14 ++++++++++----
 xen/arch/arm/p2m.c             | 34 ++++++++++++++++++++++++++++++++--
 3 files changed, 43 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 2c84e6dbbb..38e22f12af 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -1064,7 +1064,7 @@ int domain_relinquish_resources(struct domain *d)
             return ret;
 
     PROGRESS(p2m):
-        ret = p2m_teardown(d);
+        ret = p2m_teardown(d, true);
         if ( ret )
             return ret;
 
diff --git a/xen/arch/arm/include/asm/p2m.h b/xen/arch/arm/include/asm/p2m.h
index 42bfd548c4..c8f14d13c2 100644
--- a/xen/arch/arm/include/asm/p2m.h
+++ b/xen/arch/arm/include/asm/p2m.h
@@ -194,14 +194,18 @@ int p2m_init(struct domain *d);
 
 /*
  * The P2M resources are freed in two parts:
- *  - p2m_teardown() will be called when relinquish the resources. It
- *    will free large resources (e.g. intermediate page-tables) that
- *    requires preemption.
+ *  - p2m_teardown() will be called preemptively when relinquish the
+ *    resources, in which case it will free large resources (e.g. intermediate
+ *    page-tables) that requires preemption.
  *  - p2m_final_teardown() will be called when domain struct is been
  *    freed. This *cannot* be preempted and therefore one small
  *    resources should be freed here.
+ *  Note that p2m_final_teardown() will also call p2m_teardown(), to properly
+ *  free the P2M when failures happen in the domain creation with P2M pages
+ *  already in use. In this case p2m_teardown() is called non-preemptively and
+ *  p2m_teardown() will always return 0.
  */
-int p2m_teardown(struct domain *d);
+int p2m_teardown(struct domain *d, bool allow_preemption);
 void p2m_final_teardown(struct domain *d);
 
 /*
@@ -266,6 +270,8 @@ mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
 /*
  * Direct set a p2m entry: only for use by the P2M code.
  * The P2M write lock should be taken.
+ * TODO: Add a check in __p2m_set_entry() to avoid creating a mapping in
+ * arch_domain_create() that requires p2m_put_l3_page() to be called.
  */
 int p2m_set_entry(struct p2m_domain *p2m,
                   gfn_t sgfn,
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 6826f63150..00d05bb708 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1685,7 +1685,7 @@ static void p2m_free_vmid(struct domain *d)
     spin_unlock(&vmid_alloc_lock);
 }
 
-int p2m_teardown(struct domain *d)
+int p2m_teardown(struct domain *d, bool allow_preemption)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     unsigned long count = 0;
@@ -1693,6 +1693,9 @@ int p2m_teardown(struct domain *d)
     unsigned int i;
     int rc = 0;
 
+    if ( page_list_empty(&p2m->pages) )
+        return 0;
+
     p2m_write_lock(p2m);
 
     /*
@@ -1716,7 +1719,7 @@ int p2m_teardown(struct domain *d)
         p2m_free_page(p2m->domain, pg);
         count++;
         /* Arbitrarily preempt every 512 iterations */
-        if ( !(count % 512) && hypercall_preempt_check() )
+        if ( allow_preemption && !(count % 512) && hypercall_preempt_check() )
         {
             rc = -ERESTART;
             break;
@@ -1736,7 +1739,20 @@ void p2m_final_teardown(struct domain *d)
     if ( !p2m->domain )
         return;
 
+    /*
+     * No need to call relinquish_p2m_mapping() here because
+     * p2m_final_teardown() is called either after domain_relinquish_resources()
+     * where relinquish_p2m_mapping() has been called, or from failure path of
+     * domain_create()/arch_domain_create() where mappings that require
+     * p2m_put_l3_page() should never be created. For the latter case, also see
+     * comment on top of the p2m_set_entry() for more info.
+     */
+
+    BUG_ON(p2m_teardown(d, false));
     ASSERT(page_list_empty(&p2m->pages));
+
+    while ( p2m_teardown_allocation(d) == -ERESTART )
+        continue; /* No preemption support here */
     ASSERT(page_list_empty(&d->arch.paging.p2m_freelist));
 
     if ( p2m->root )
@@ -1803,6 +1819,20 @@ int p2m_init(struct domain *d)
     if ( rc )
         return rc;
 
+    /*
+     * Hardware using GICv2 needs to create a P2M mapping of 8KB GICv2 area
+     * when the domain is created. Considering the worst case for page
+     * tables and keep a buffer, populate 16 pages to the P2M pages pool here.
+     * For GICv3, the above-mentioned P2M mapping is not necessary, but since
+     * the allocated 16 pages here would not be lost, hence populate these
+     * pages unconditionally.
+     */
+    spin_lock(&d->arch.paging.lock);
+    rc = p2m_set_allocation(d, 16, NULL);
+    spin_unlock(&d->arch.paging.lock);
+    if ( rc )
+        return rc;
+
     return 0;
 }
 
--
generated by git-patchbot for /home/xen/git/xen.git#staging
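The `allow_preemption` flag added to p2m_teardown() follows a common hypervisor shape: free resources in a loop and, only when preemption is allowed, check every N iterations whether to bail out with -ERESTART so the operation can be continued later. A self-contained C sketch of that shape, where the page list and preemption check are stand-ins (not the Xen implementation, and ERESTART is given an illustrative local value):

```c
#include <assert.h>
#include <stdbool.h>

#define ERESTART 85            /* illustrative value; Xen defines its own */

static bool preempt_wanted;    /* stand-in for hypercall_preempt_check() */

/* Drain a counter of "pages", optionally yielding every 512 iterations. */
static int teardown(unsigned long *pages, bool allow_preemption)
{
    unsigned long count = 0;

    while ( *pages )
    {
        --*pages;              /* stand-in for freeing one page */
        count++;
        /* Arbitrarily preempt every 512 iterations, as p2m_teardown() does. */
        if ( allow_preemption && !(count % 512) && preempt_wanted )
            return -ERESTART;  /* caller re-invokes to continue */
    }

    return 0;
}
```

With `allow_preemption` false, the loop is guaranteed to run to completion and return 0, which is exactly why p2m_final_teardown() can wrap the call in a BUG_ON().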


From xen-changelog-bounces@lists.xenproject.org Thu Oct 20 14:00:10 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Oct 2022 14:00:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.426828.675535 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olW5d-00036K-8i; Thu, 20 Oct 2022 14:00:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 426828.675535; Thu, 20 Oct 2022 14:00:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olW5d-00036C-5b; Thu, 20 Oct 2022 14:00:05 +0000
Received: by outflank-mailman (input) for mailman id 426828;
 Thu, 20 Oct 2022 14:00:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olW5c-000324-QP
 for xen-changelog@lists.xenproject.org; Thu, 20 Oct 2022 14:00:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olW5c-0000i6-N5
 for xen-changelog@lists.xenproject.org; Thu, 20 Oct 2022 14:00:04 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olW5c-00081I-M7
 for xen-changelog@lists.xenproject.org; Thu, 20 Oct 2022 14:00:04 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=DYelt7NLJhP4sqJfAcs0fQ8joaAo8uNKkBarSGJWVVA=; b=Mr9DNC6pDio7FrBeaLAsmhTXtU
	bAZR/wJxbKju1dkXGeyLk50RdX8JrhEtAtNCqiQrOsLI2WUfseeKO7W5COb0IwCW8kAvR82pmsLri
	AxnYGYJchtkUnoLEknMXj+N9IV09qDXnekC57oB5NZshG13xPL00X9xqxSldbN9te0tM=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [qemu-xen staging-4.16] ebpf: replace deprecated bpf_program__set_socket_filter
Message-Id: <E1olW5c-00081I-M7@xenbits.xenproject.org>
Date: Thu, 20 Oct 2022 14:00:04 +0000

commit 62dd49f2172fb7dfe8d4223bfa45aede05155328
Author:     Haochen Tong <i@hexchain.org>
AuthorDate: Sat May 28 03:06:58 2022 +0800
Commit:     Anthony PERARD <anthony.perard@gmail.com>
CommitDate: Thu Oct 20 14:39:06 2022 +0100

    ebpf: replace deprecated bpf_program__set_socket_filter
    
    bpf_program__set_<TYPE> functions have been deprecated since libbpf 0.8.
    Replace with the equivalent bpf_program__set_type call to avoid a
    deprecation warning.
    
    Signed-off-by: Haochen Tong <i@hexchain.org>
    Reviewed-by: Zhang Chen <chen.zhang@intel.com>
    Signed-off-by: Jason Wang <jasowang@redhat.com>
    (cherry picked from commit a495eba03c31c96d6a0817b13598ce2219326691)
---
 ebpf/ebpf_rss.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ebpf/ebpf_rss.c b/ebpf/ebpf_rss.c
index 118c68da83..cee658c158 100644
--- a/ebpf/ebpf_rss.c
+++ b/ebpf/ebpf_rss.c
@@ -49,7 +49,7 @@ bool ebpf_rss_load(struct EBPFRSSContext *ctx)
         goto error;
     }
 
-    bpf_program__set_socket_filter(rss_bpf_ctx->progs.tun_rss_steering_prog);
+    bpf_program__set_type(rss_bpf_ctx->progs.tun_rss_steering_prog, BPF_PROG_TYPE_SOCKET_FILTER);
 
     if (rss_bpf__load(rss_bpf_ctx)) {
         trace_ebpf_error("eBPF RSS", "can not load RSS program");
--
generated by git-patchbot for /home/xen/git/qemu-xen.git#staging-4.16


From xen-changelog-bounces@lists.xenproject.org Thu Oct 20 14:44:09 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Oct 2022 14:44:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.426873.675607 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olWmD-0002SD-3C; Thu, 20 Oct 2022 14:44:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 426873.675607; Thu, 20 Oct 2022 14:44:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olWmD-0002S5-0Z; Thu, 20 Oct 2022 14:44:05 +0000
Received: by outflank-mailman (input) for mailman id 426873;
 Thu, 20 Oct 2022 14:44:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olWmC-0002Rz-8f
 for xen-changelog@lists.xenproject.org; Thu, 20 Oct 2022 14:44:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olWmC-0001Us-7l
 for xen-changelog@lists.xenproject.org; Thu, 20 Oct 2022 14:44:04 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olWmC-0001i2-6c
 for xen-changelog@lists.xenproject.org; Thu, 20 Oct 2022 14:44:04 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=fKfD4ZaJj0gm8zLzrBumZBdB7hmHyqt3X+TYhqri1Gk=; b=TehxBVg06WpcwNSBXOyvp9DEfW
	P53kUq7WOxkqwIwN5tsg8Q1Bgme4XGdznugad7C1RQcVcTwWcp65iVDnVzkHANyRMS0xk34rAavdb
	oN096Ms6w5/np7UsI/ETnLzl8w89cuTR94aoJPd1RHGZkEXLkgi2ZO1FXLhCuoVSqpJk=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] test/vpci: add dummy cfcheck define
Message-Id: <E1olWmC-0001i2-6c@xenbits.xenproject.org>
Date: Thu, 20 Oct 2022 14:44:04 +0000

commit b71419530d70d9b1f2ba524aabd27a9efe08f52f
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Thu Oct 20 16:36:48 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Thu Oct 20 16:36:48 2022 +0200

    test/vpci: add dummy cfcheck define
    
    Some vpci functions got the cf_check attribute added, but that's not
    defined in the user-space test harness, so add a dummy define in order
    for the harness to build.
    
    Fixes: 4ed7d5525f ('xen/vpci: CFI hardening')
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Anthony PERARD <anthony.perard@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/tests/vpci/emul.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tools/tests/vpci/emul.h b/tools/tests/vpci/emul.h
index 2e1d3057c9..386b15eb86 100644
--- a/tools/tests/vpci/emul.h
+++ b/tools/tests/vpci/emul.h
@@ -37,6 +37,7 @@
 #define prefetch(x) __builtin_prefetch(x)
 #define ASSERT(x) assert(x)
 #define __must_check __attribute__((__warn_unused_result__))
+#define cf_check
 
 #include "list.h"
 
--
generated by git-patchbot for /home/xen/git/xen.git#staging
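The empty `#define cf_check` follows the harness's existing trick of stubbing hypervisor-only annotations so shared source compiles as plain user-space code. A minimal sketch, where `vpci_read_stub` is an illustrative stand-in rather than a real vPCI handler:

```c
#include <assert.h>

/* Under a Xen build, cf_check expands to a CFI attribute; in the
 * user-space harness it is defined away to nothing, so annotated
 * functions compile unchanged. */
#define cf_check    /* no-op outside the hypervisor build */

static int cf_check vpci_read_stub(unsigned int reg)
{
    return (int)reg + 1;   /* trivial stand-in for a register handler */
}
```

Any source annotated with `cf_check` can now be pulled into the harness without modification.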


From xen-changelog-bounces@lists.xenproject.org Thu Oct 20 14:44:15 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Oct 2022 14:44:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.426874.675612 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olWmN-0002U9-4n; Thu, 20 Oct 2022 14:44:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 426874.675612; Thu, 20 Oct 2022 14:44:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olWmN-0002U1-2B; Thu, 20 Oct 2022 14:44:15 +0000
Received: by outflank-mailman (input) for mailman id 426874;
 Thu, 20 Oct 2022 14:44:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olWmM-0002Tt-Bc
 for xen-changelog@lists.xenproject.org; Thu, 20 Oct 2022 14:44:14 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olWmM-0001Uz-Ak
 for xen-changelog@lists.xenproject.org; Thu, 20 Oct 2022 14:44:14 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olWmM-0001ig-9q
 for xen-changelog@lists.xenproject.org; Thu, 20 Oct 2022 14:44:14 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=yjHv0MwDxYJOLh0510tbh+O9SjxxvPy34fBB7FUXOqI=; b=g29jYeed2efKvpVfe/6wYCbbvo
	gAIfpnHBhSdyYJg/i3bCmt+Z4Ya3orcZnTw0PVThIb8NTFnzswwfm7CsblGDf6Y51cqX58MbP4dv+
	U3TG3rYf73lf6yKMUawaEhuYzn+LfUNI++Gh010xRvB3gQAjWgh8ipqMyfgg6zwh0C6k=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] test/vpci: fix vPCI test harness to provide pci_get_pdev()
Message-Id: <E1olWmM-0001ig-9q@xenbits.xenproject.org>
Date: Thu, 20 Oct 2022 14:44:14 +0000

commit 1cfccd4b07dd1cf38290d930e2b687c031589db3
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Thu Oct 20 16:37:15 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Thu Oct 20 16:37:15 2022 +0200

    test/vpci: fix vPCI test harness to provide pci_get_pdev()
    
    Instead of pci_get_pdev_by_domain(), which is no longer present in the
    hypervisor.
    
    While there, add parentheses around the define value.
    
    Fixes: a37f9ea7a6 ('PCI: fold pci_get_pdev{,_by_domain}()')
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Anthony PERARD <anthony.perard@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/tests/vpci/emul.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/tests/vpci/emul.h b/tools/tests/vpci/emul.h
index 386b15eb86..f03e3a56d1 100644
--- a/tools/tests/vpci/emul.h
+++ b/tools/tests/vpci/emul.h
@@ -92,7 +92,7 @@ typedef union {
 #define xmalloc(type) ((type *)malloc(sizeof(type)))
 #define xfree(p) free(p)
 
-#define pci_get_pdev_by_domain(...) &test_pdev
+#define pci_get_pdev(...) (&test_pdev)
 #define pci_get_ro_map(...) NULL
 
 #define test_bit(...) false
--
generated by git-patchbot for /home/xen/git/xen.git#staging
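[Editor's note: a minimal sketch, not part of the patch, illustrating the two ideas in the hunk above. The names (`pdev_stub`, `get_pdev_stub`) are invented for illustration; the variadic `...` lets the stub swallow whatever arguments the real `pci_get_pdev()` takes, and the parentheses around the replacement value make the expansion a single primary expression so operators at the call site cannot rebind into it.]

```c
#include <assert.h>

/* Hypothetical stand-ins for the test harness's fixed device object. */
struct pdev_stub { int devfn; };
static struct pdev_stub test_pdev_stub = { 42 };

/* A variadic stub macro, parenthesised as in the patch: any argument
 * list is accepted and discarded, and the expansion (&test_pdev_stub)
 * behaves as one expression under `->`, casts, and comparisons. */
#define get_pdev_stub(...) (&test_pdev_stub)
```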


From xen-changelog-bounces@lists.xenproject.org Thu Oct 20 14:44:25 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Oct 2022 14:44:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.426875.675616 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olWmX-0002Wq-6E; Thu, 20 Oct 2022 14:44:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 426875.675616; Thu, 20 Oct 2022 14:44:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olWmX-0002Wj-3d; Thu, 20 Oct 2022 14:44:25 +0000
Received: by outflank-mailman (input) for mailman id 426875;
 Thu, 20 Oct 2022 14:44:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olWmW-0002WQ-FA
 for xen-changelog@lists.xenproject.org; Thu, 20 Oct 2022 14:44:24 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olWmW-0001VK-Dp
 for xen-changelog@lists.xenproject.org; Thu, 20 Oct 2022 14:44:24 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olWmW-0001jO-Cp
 for xen-changelog@lists.xenproject.org; Thu, 20 Oct 2022 14:44:24 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=hdmqQB6KKjNBFMMzewjX4GvETvE0nQoScGk7zTY540Y=; b=vvfS4mDHRZkX6VBKG3o1EAxG94
	NDcI7SUz5K05s60+uuNQ18FV2CNZ4b6gz408MSsb8NiTo59JPFMXYr8QONICOYXOQHdmiQSHJnLZI
	r+KatuZebdzNG6qIK74ZwrX3z2OkbqktBEar4qcos+dLwlj1+UlTCnmMiJw1fEyoZLD4=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] test/vpci: enable by default
Message-Id: <E1olWmW-0001jO-Cp@xenbits.xenproject.org>
Date: Thu, 20 Oct 2022 14:44:24 +0000

commit e9444d87427a1ac4518ee0a62da5d8803262c6cb
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Thu Oct 20 16:37:29 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Thu Oct 20 16:37:29 2022 +0200

    test/vpci: enable by default
    
    CONFIG_HAS_PCI is not defined for the tools build, and as a result the
    vpci harness would never get built.  Fix this by building it
    unconditionally; there's nothing arch-specific in it.
    
    Reported-by: Andrew Cooper <Andrew.Cooper3@citrix.com>
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Anthony PERARD <anthony.perard@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/tests/Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/tests/Makefile b/tools/tests/Makefile
index 33e32730c4..d99146d56a 100644
--- a/tools/tests/Makefile
+++ b/tools/tests/Makefile
@@ -10,7 +10,7 @@ SUBDIRS-$(CONFIG_X86) += x86_emulator
 endif
 SUBDIRS-y += xenstore
 SUBDIRS-y += depriv
-SUBDIRS-$(CONFIG_HAS_PCI) += vpci
+SUBDIRS-y += vpci
 
 .PHONY: all clean install distclean uninstall
 all clean distclean install uninstall: %: subdirs-%
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Thu Oct 20 14:55:08 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Oct 2022 14:55:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.426884.675637 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olWws-0004AU-Ao; Thu, 20 Oct 2022 14:55:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 426884.675637; Thu, 20 Oct 2022 14:55:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olWws-0004AM-7x; Thu, 20 Oct 2022 14:55:06 +0000
Received: by outflank-mailman (input) for mailman id 426884;
 Thu, 20 Oct 2022 14:55:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olWwq-00048E-JN
 for xen-changelog@lists.xenproject.org; Thu, 20 Oct 2022 14:55:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olWwq-0001jt-Fx
 for xen-changelog@lists.xenproject.org; Thu, 20 Oct 2022 14:55:04 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olWwq-0002As-Ew
 for xen-changelog@lists.xenproject.org; Thu, 20 Oct 2022 14:55:04 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=Nk5GiTATwPJruCG3AXwDVgwVzxy1yOvHdh6IHIpudC0=; b=D9wOsAI5/zPzw31RS/FTBHgJRh
	TycMGnR7c7px6ftbjm4Ntm7ovQIJCWfqIDq8e3azefi3s3IvyWLhhPWIHajaxfwVSBnXzSIcjM8yk
	Qy7lLaC8lq1onv/VXjCT6aFGFgMrSrRa/d+yJ2GVg2ywaxJaGn4p2mKCYzxngJ/dMN/c=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] tools/oxenstored: Fix Oxenstored Live Update
Message-Id: <E1olWwq-0002As-Ew@xenbits.xenproject.org>
Date: Thu, 20 Oct 2022 14:55:04 +0000

commit 7110192b1df697be84a50f741651d4c3cb129504
Author:     Andrew Cooper <andrew.cooper3@citrix.com>
AuthorDate: Wed Oct 19 18:12:33 2022 +0100
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Thu Oct 20 15:48:22 2022 +0100

    tools/oxenstored: Fix Oxenstored Live Update
    
    tl;dr This hunk was part of the patch emailed to xen-devel, but was missing
    from what ultimately got committed.
    
    https://lore.kernel.org/xen-devel/4164cb728313c3b9fc38cf5e9ecb790ac93a9600.1610748224.git.edvin.torok@citrix.com/
    is the patch in question, but was part of a series that had threading issues.
    I have a vague recollection that I sourced the commits from a local branch,
    which clearly wasn't as up-to-date as I had thought.
    
    Either way, it's my fault/mistake, and this hunk should have been part of what
    got committed.
    
    Fixes: 00c48f57ab36 ("tools/oxenstored: Start live update process")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/ocaml/xenstored/xenstored.ml | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/tools/ocaml/xenstored/xenstored.ml b/tools/ocaml/xenstored/xenstored.ml
index d44ae673c4..fc90fcdeb5 100644
--- a/tools/ocaml/xenstored/xenstored.ml
+++ b/tools/ocaml/xenstored/xenstored.ml
@@ -352,6 +352,11 @@ let _ =
 		rw_sock
 	) in
 
+	(* required for xenstore-control to detect availability of live-update *)
+	Store.mkdir store Perms.Connection.full_rights (Store.Path.of_string "/tool");
+	Store.write store Perms.Connection.full_rights
+		(Store.Path.of_string "/tool/xenstored") Sys.executable_name;
+
 	Sys.set_signal Sys.sighup (Sys.Signal_handle sighup_handler);
 	Sys.set_signal Sys.sigterm (Sys.Signal_handle (fun _ ->
 		info "Received SIGTERM";
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Thu Oct 20 16:44:09 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Oct 2022 16:44:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.426977.675760 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olYeM-0003Wv-GX; Thu, 20 Oct 2022 16:44:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 426977.675760; Thu, 20 Oct 2022 16:44:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olYeM-0003Wn-DX; Thu, 20 Oct 2022 16:44:06 +0000
Received: by outflank-mailman (input) for mailman id 426977;
 Thu, 20 Oct 2022 16:44:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olYeL-0003Wh-1Q
 for xen-changelog@lists.xenproject.org; Thu, 20 Oct 2022 16:44:05 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olYeK-0004Sl-UZ
 for xen-changelog@lists.xenproject.org; Thu, 20 Oct 2022 16:44:04 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olYeK-0007Y3-TX
 for xen-changelog@lists.xenproject.org; Thu, 20 Oct 2022 16:44:04 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=JFVnIt0Dn4ISepNqgTYjx87PuKHsD8QL33XOyUn+DC4=; b=Y+fzf1PIsZ2oAnz4ZhTzoOgy1D
	bwYGmLYUs0OJBDvppmn/FeJKtViHuXk0nsyCEHDcm06fUAaYNN1SVWsBBoQPeO2mSeJ2T6pogjDHn
	jCcFt26T4xWlTCURIdh9qqQNh3UKk6EfTerQZcH3KVG1TMen3ip6RrslblbOVWg0IkB0=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] tools/xendomains: Restrict domid pattern in LIST_GREP
Message-Id: <E1olYeK-0007Y3-TX@xenbits.xenproject.org>
Date: Thu, 20 Oct 2022 16:44:04 +0000

commit 0c06760be3dc3f286015e18c4b1d1694e55da026
Author:     Peter Hoyes <Peter.Hoyes@arm.com>
AuthorDate: Mon Oct 3 15:42:16 2022 +0100
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Thu Oct 20 17:38:56 2022 +0100

    tools/xendomains: Restrict domid pattern in LIST_GREP
    
    The xendomains script uses the output of `xl list -l` to collect the
    id and name of each domain, which is used in the shutdown logic, amongst
    other purposes.
    
    The linked commit added a "domid" field to libxl_domain_create_info.
    This causes the output of `xl list -l` to contain two "domid"s per
    domain, which may not be equal. This in turn causes `xendomains stop` to
    issue two shutdown commands per domain, one of which is to a duplicate
    and/or invalid domid.
    
    To work around this, make the LIST_GREP pattern more restrictive for
    domid, so it only detects the domid at the top level and not the domid
    inside c_info.
    
    Fixes: 4a3a25678d92 ("libxl: allow creation of domains with a specified or random domid")
    Signed-off-by: Peter Hoyes <Peter.Hoyes@arm.com>
    Acked-by: Anthony PERARD <anthony.perard@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/hotplug/Linux/xendomains.in | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/hotplug/Linux/xendomains.in b/tools/hotplug/Linux/xendomains.in
index 334d244882..70f4129ef4 100644
--- a/tools/hotplug/Linux/xendomains.in
+++ b/tools/hotplug/Linux/xendomains.in
@@ -211,7 +211,7 @@ get_xsdomid()
     fi
 }
 
-LIST_GREP='(domain\|(domid\|(name\|^    {$\|"name":\|"domid":'
+LIST_GREP='(domain\|(domid\|(name\|^    {$\|"name":\|^        "domid":'
 parseln()
 {
     if [[ "$1" =~ '(domain' ]] || [[ "$1" = "{" ]]; then
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 09:33:10 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 09:33:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427488.676598 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oloOn-0001OY-Mr; Fri, 21 Oct 2022 09:33:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427488.676598; Fri, 21 Oct 2022 09:33:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oloOn-0001OQ-It; Fri, 21 Oct 2022 09:33:05 +0000
Received: by outflank-mailman (input) for mailman id 427488;
 Fri, 21 Oct 2022 09:33:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oloOm-0001OK-Ai
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 09:33:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oloOm-00072f-9q
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 09:33:04 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oloOm-0001Ms-8o
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 09:33:04 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=4wadMxFrzGMS4nYZUfQ5rOXEBedSyZrSckCB5qHLXfs=; b=bOmA13xScq5nsefiiMYs6CF8jB
	DqyAcbIkLwk7gslAiDJipMF6VGcsOZQ2htT60ibrOaHs8K2G1ILt9gvTCzq9pYxU1APMldlIDb/m5
	i729/V4r3yabmFSwDIeBT5ZjF85qDmT28ysor9snC/tgkqlbyICKThpu235RUh4rloIs=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] tools/ocaml/xenstored: fix live update exception
Message-Id: <E1oloOm-0001Ms-8o@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 09:33:04 +0000

commit f838b956779ff8a0b94636462f3c6d95c3adeb73
Author:     Edwin Török <edvin.torok@citrix.com>
AuthorDate: Fri Oct 21 08:59:25 2022 +0100
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Fri Oct 21 10:28:12 2022 +0100

    tools/ocaml/xenstored: fix live update exception
    
    During live update we will load the /tool/xenstored path from the previous binary,
    and then try to mkdir /tool again, which will fail with EEXIST.
    Check for existence of the path before creating it.
    
    The write call to /tool/xenstored should not need any changes
    (and we do want to overwrite any previous path, in case it changed).
    
    Prior to 7110192b1df6, live update would work only if the binary path was
    specified; with 7110192b1df6 and this patch, live update also works when
    no binary path is specified in `xenstore-control live-update`.
    
    Fixes: 7110192b1df6 ("tools/oxenstored: Fix Oxenstored Live Update")
    Signed-off-by: Edwin Török <edvin.torok@citrix.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/ocaml/xenstored/xenstored.ml | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/tools/ocaml/xenstored/xenstored.ml b/tools/ocaml/xenstored/xenstored.ml
index fc90fcdeb5..acc7290627 100644
--- a/tools/ocaml/xenstored/xenstored.ml
+++ b/tools/ocaml/xenstored/xenstored.ml
@@ -353,7 +353,9 @@ let _ =
 	) in
 
 	(* required for xenstore-control to detect availability of live-update *)
-	Store.mkdir store Perms.Connection.full_rights (Store.Path.of_string "/tool");
+	let tool_path = Store.Path.of_string "/tool" in
+	if not (Store.path_exists store tool_path) then
+		Store.mkdir store Perms.Connection.full_rights tool_path;
 	Store.write store Perms.Connection.full_rights
 		(Store.Path.of_string "/tool/xenstored") Sys.executable_name;
 
--
generated by git-patchbot for /home/xen/git/xen.git#staging
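[Editor's note: a minimal sketch of the same "create only if missing" idiom the OCaml fix above uses, expressed in C with the classic mkdir/EEXIST pattern. The helper name is invented; this is not Xen code. Checking first and creating only on absence (or, equivalently, creating and tolerating EEXIST) makes the operation idempotent across restarts such as a live update.]

```c
#include <errno.h>
#include <sys/stat.h>
#include <sys/types.h>

/* Create a directory, treating "already exists" as success so the call
 * is safe to repeat, e.g. after reloading state from a previous binary. */
static int mkdir_if_missing(const char *path, mode_t mode)
{
    if (mkdir(path, mode) == 0 || errno == EEXIST)
        return 0;
    return -1;
}
```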


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 10:22:08 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 10:22:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427550.676728 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olpAD-0003xT-O2; Fri, 21 Oct 2022 10:22:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427550.676728; Fri, 21 Oct 2022 10:22:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olpAD-0003xL-LF; Fri, 21 Oct 2022 10:22:05 +0000
Received: by outflank-mailman (input) for mailman id 427550;
 Fri, 21 Oct 2022 10:22:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olpAC-0003xD-Nr
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 10:22:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olpAC-00082t-LQ
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 10:22:04 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olpAC-0003ix-KX
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 10:22:04 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=aHf3s60Z2cI1hqIM0YmKyyrkk9di/CnvBeh2Nl4XnME=; b=jLJG5/aJrmzswiwLtI5CzicqOb
	oz4fOlld4GfalTvVGKVsFbj/mLZtqsfYXNjAg2iAB0u3s9tJpz7CtboohiTWcXA0xrKgnbafAWg17
	QL2UvQn3WbZifOu1Px2xkPbuL2NAdHH0xuo9dlliqrttUl86XW/eDojtpg8wxs+Ei5Xw=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] xen/arm: mark handle_linux_pci_domain() __init
Message-Id: <E1olpAC-0003ix-KX@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 10:22:04 +0000

commit e0347046445a2c6245f6a04093e7e831100611a1
Author:     Stewart Hildebrand <stewart.hildebrand@amd.com>
AuthorDate: Fri Oct 14 16:09:26 2022 -0400
Commit:     Julien Grall <jgrall@amazon.com>
CommitDate: Fri Oct 21 11:09:59 2022 +0100

    xen/arm: mark handle_linux_pci_domain() __init
    
    All functions in domain_build.c should be marked __init. This was
    spotted when building the hypervisor with -Og.
    
    Fixes: 1050a7b91c2e ("xen/arm: add pci-domain for disabled devices")
    Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/arch/arm/domain_build.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index db97536fe8..4fb5c20b13 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1051,8 +1051,8 @@ static void __init assign_static_memory_11(struct domain *d,
  * The current heuristic assumes that a device is a host bridge
  * if the type is "pci" and then parent type is not "pci".
  */
-static int handle_linux_pci_domain(struct kernel_info *kinfo,
-                                   const struct dt_device_node *node)
+static int __init handle_linux_pci_domain(struct kernel_info *kinfo,
+                                          const struct dt_device_node *node)
 {
     uint16_t segment;
     int res;
--
generated by git-patchbot for /home/xen/git/xen.git#staging
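[Editor's note: a hedged sketch of what the `__init` annotation above does, assuming a GCC/Clang toolchain. In Xen, `__init` places a function in a dedicated section so the hypervisor can discard that memory once boot completes; the macro and function name below are simplified stand-ins, not Xen's actual definitions.]

```c
/* Simplified illustration of an __init-style macro: the attribute moves
 * the function's code into a named section that a linker script can
 * group together and the runtime can later free or unmap. */
#define my_init __attribute__((__section__(".init.text")))

static int my_init setup_once(void)
{
    /* Boot-time-only work would go here. */
    return 1;
}
```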


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 10:22:15 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 10:22:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427551.676732 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olpAN-0003zs-PS; Fri, 21 Oct 2022 10:22:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427551.676732; Fri, 21 Oct 2022 10:22:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olpAN-0003zk-Ms; Fri, 21 Oct 2022 10:22:15 +0000
Received: by outflank-mailman (input) for mailman id 427551;
 Fri, 21 Oct 2022 10:22:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olpAM-0003za-PH
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 10:22:14 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olpAM-000834-OZ
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 10:22:14 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olpAM-0003jO-Nc
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 10:22:14 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=IMYPiWLK8mEdwsaaM8Zks+vQjpvPFxLMYIaf4s0mWFI=; b=FV1I8OPGFCvekUPHlW1m39jb7B
	SxLJYILxzExFH7LoAKyaAjmC2py4ylVR/QDEyPtPBVxLUwms5EnKUCCh5QGMaVhGrJrLsBcUumFrc
	CeuxUBG0eEH58UolOO/DS1ra4MTsjbkx+vOV23YbKf/ktvLKEpmYntLLQ7yDFIeUSNRg=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] xen/arm: p2m: fix pa_range_info for 52-bit pa range
Message-Id: <E1olpAM-0003jO-Nc@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 10:22:14 +0000

commit 974c8d810a1daacb3322015cd1c124d26155fc75
Author:     Xenia Ragiadakou <burzalodowa@gmail.com>
AuthorDate: Wed Oct 19 17:49:13 2022 +0300
Commit:     Julien Grall <jgrall@amazon.com>
CommitDate: Fri Oct 21 11:15:25 2022 +0100

    xen/arm: p2m: fix pa_range_info for 52-bit pa range
    
    Currently, the fields 'root_order' and 'sl0' of the pa_range_info for
    the 52-bit pa range have the values 3 and 3, respectively.
    This configuration does not match any of the valid root table configurations
    for 4KB granule and t0sz 12, described in ARM DDI 0487I.a D8.2.7.
    
    More specifically, according to ARM DDI 0487I.a D8.2.7, in order to support
    the 52-bit pa size with 4KB granule, the p2m root table needs to be configured
    either as a single table at level -1 or as 16 concatenated tables at level 0.
    Since there is currently no support for level -1, set the 'root_order' and
    'sl0' fields of the 52-bit pa_range_info according to the second approach.
    
    Note that the values of those fields are not used so far. This patch updates
    their values only for the sake of correctness.
    
    Fixes: 407b13a71e32 ("xen/arm: p2m don't fall over on FEAT_LPA enabled hw")
    Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
    Reviewed-by: Michal Orzel <michal.orzel@amd.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/arch/arm/p2m.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 00d05bb708..94d3b60b13 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -2281,7 +2281,7 @@ void __init setup_virt_paging(void)
         [3] = { 42,      22/*22*/,  3,          1 },
         [4] = { 44,      20/*20*/,  0,          2 },
         [5] = { 48,      16/*16*/,  0,          2 },
-        [6] = { 52,      12/*12*/,  3,          3 },
+        [6] = { 52,      12/*12*/,  4,          2 },
         [7] = { 0 }  /* Invalid */
     };
 
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 10:44:08 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 10:44:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427580.676791 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olpVV-0000Yl-Oh; Fri, 21 Oct 2022 10:44:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427580.676791; Fri, 21 Oct 2022 10:44:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olpVV-0000Yd-Ly; Fri, 21 Oct 2022 10:44:05 +0000
Received: by outflank-mailman (input) for mailman id 427580;
 Fri, 21 Oct 2022 10:44:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olpVU-0000YQ-Dx
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 10:44:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olpVU-0008P0-Ab
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 10:44:04 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olpVU-0004nr-9g
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 10:44:04 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=ulrv3SgDVBSXRnG2KyP9PHo9bV3riOD2mSOFNQ6QSA8=; b=Ty+fEYg6LwGHoVl7NWHmhMKrnE
	rypSUnqAFf5ouE0fUh+wUt2IPp0KlD9DpRiygXqnUwJPCb6et+XH9hBstc2aJL3AbNvvRsNCDeVuQ
	UegkVrkSIuW1IigiDGNp/cWjoAeXHSs+DsRJDKRT2cwlB4DXUzaUicjrclCH2EBxaTkg=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] EFI: don't convert memory marked for runtime use to ordinary RAM
Message-Id: <E1olpVU-0004nr-9g@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 10:44:04 +0000

commit f324300c8347b6aa6f9c0b18e0a90bbf44011a9a
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Fri Oct 21 12:30:24 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Fri Oct 21 12:30:24 2022 +0200

    EFI: don't convert memory marked for runtime use to ordinary RAM
    
    efi_init_memory(), in both relevant places, treats EFI_MEMORY_RUNTIME
    as higher priority than the type of the range. To avoid accessing
    memory at runtime which was re-used for other purposes, make
    efi_arch_process_memory_map() follow suit. While in theory the same
    would apply to EfiACPIReclaimMemory, we don't actually "reclaim" or
    clobber that memory (converted to E820_ACPI on x86) there (and it
    would be a bug if the Dom0 kernel tried to reclaim the range,
    bypassing Xen's memory management, plus it would be at least bogus if
    it clobbered that space), hence that type's handling can be left alone.
    
    Fixes: bf6501a62e80 ("x86-64: EFI boot code")
    Fixes: facac0af87ef ("x86-64: EFI runtime code")
    Fixes: 6d70ea10d49f ("Add ARM EFI boot support")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
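
[Editor's note: the reordered check on the x86 side can be illustrated with a
standalone sketch. The constants match the UEFI spec, but classify() and the
two-value enum are illustrative stand-ins, not Xen's actual code: the point is
that the RUNTIME attribute is tested before the cacheability attribute, so a
runtime-marked range never becomes E820_RAM.]

```c
#include <stdint.h>

/* Attribute bits per the UEFI spec's EFI_MEMORY_DESCRIPTOR. */
#define EFI_MEMORY_WB      0x0000000000000008ULL
#define EFI_MEMORY_RUNTIME 0x8000000000000000ULL

enum e820_type { E820_RAM = 1, E820_RESERVED = 2 };

/* Hypothetical helper mirroring the reordered logic: a range whose
 * attributes include EFI_MEMORY_RUNTIME stays reserved even if it is
 * ordinary write-back conventional memory. */
static enum e820_type classify(uint64_t attr)
{
    if ( attr & EFI_MEMORY_RUNTIME )
        return E820_RESERVED;
    if ( attr & EFI_MEMORY_WB )
        return E820_RAM;
    return E820_RESERVED;   /* non-WB memory is not usable as RAM */
}
```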
---
 xen/arch/arm/efi/efi-boot.h | 3 ++-
 xen/arch/x86/efi/efi-boot.h | 4 +++-
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/efi/efi-boot.h b/xen/arch/arm/efi/efi-boot.h
index 59d93c24a1..43a836c3a7 100644
--- a/xen/arch/arm/efi/efi-boot.h
+++ b/xen/arch/arm/efi/efi-boot.h
@@ -183,7 +183,8 @@ static EFI_STATUS __init efi_process_memory_map_bootinfo(EFI_MEMORY_DESCRIPTOR *
 
     for ( Index = 0; Index < (mmap_size / desc_size); Index++ )
     {
-        if ( desc_ptr->Attribute & EFI_MEMORY_WB &&
+        if ( !(desc_ptr->Attribute & EFI_MEMORY_RUNTIME) &&
+             (desc_ptr->Attribute & EFI_MEMORY_WB) &&
              (desc_ptr->Type == EfiConventionalMemory ||
               desc_ptr->Type == EfiLoaderCode ||
               desc_ptr->Type == EfiLoaderData ||
diff --git a/xen/arch/x86/efi/efi-boot.h b/xen/arch/x86/efi/efi-boot.h
index 836e8c2ba1..e82ac9daa7 100644
--- a/xen/arch/x86/efi/efi-boot.h
+++ b/xen/arch/x86/efi/efi-boot.h
@@ -185,7 +185,9 @@ static void __init efi_arch_process_memory_map(EFI_SYSTEM_TABLE *SystemTable,
             /* fall through */
         case EfiLoaderCode:
         case EfiLoaderData:
-            if ( desc->Attribute & EFI_MEMORY_WB )
+            if ( desc->Attribute & EFI_MEMORY_RUNTIME )
+                type = E820_RESERVED;
+            else if ( desc->Attribute & EFI_MEMORY_WB )
                 type = E820_RAM;
             else
         case EfiUnusableMemory:
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 10:44:16 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 10:44:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427582.676806 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olpVg-0000vr-1l; Fri, 21 Oct 2022 10:44:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427582.676806; Fri, 21 Oct 2022 10:44:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olpVf-0000vk-V5; Fri, 21 Oct 2022 10:44:15 +0000
Received: by outflank-mailman (input) for mailman id 427582;
 Fri, 21 Oct 2022 10:44:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olpVe-0000up-Eg
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 10:44:14 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olpVe-0008PB-Db
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 10:44:14 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olpVe-0004oP-Ci
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 10:44:14 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=wnNtRajPf229FC95tpfwzYaQFholfCGf26Z1FVSApAc=; b=4kAdIhb2uNV6COJHqTjXQqPIUU
	mOu6ZRZAYEV4gCqlN58WrBcybwF/9U1GlHUgSbmtWCn5SK8t9q5lDNMz9SA9y2NAtmnWxnnCcqpM3
	PalWGl6ZzYESIVsny8MNTQZsawBQKK+tYhTsqB5dqr8yOgN2TmJZa0LEuDnaxa0k3LIU=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] xen/sched: fix race in RTDS scheduler
Message-Id: <E1olpVe-0004oP-Ci@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 10:44:14 +0000

commit 73c62927f64ecb48f27d06176befdf76b879f340
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Fri Oct 21 12:32:23 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Fri Oct 21 12:32:23 2022 +0200

    xen/sched: fix race in RTDS scheduler
    
    When a domain gets paused, the runnable state of its units can change
    to "not runnable" without the scheduling lock being taken, meaning
    the specific scheduler isn't involved in this change of runnable
    state.
    
    In the RTDS scheduler this can result in an inconsistency in case a
    unit loses its "runnable" state while the RTDS scheduling function is
    active: RTDS will remove the unit from the run queue, but not from
    the replenishment queue, leading to an ASSERT() in replq_insert()
    being hit later, when the domain is unpaused again.
    
    Fix that by removing the unit from the replenishment queue as well in
    this case.
    
    Fixes: 7c7b407e7772 ("xen/sched: introduce unit_runnable_state()")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Dario Faggioli <dfaggioli@suse.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
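
[Editor's note: the invariant the fix restores can be shown with a minimal
standalone sketch. The struct and helpers are illustrative stand-ins for
RTDS's actual queue code: once a unit leaves the run queue on the deschedule
path, it must leave the replenishment queue too, or the duplicate-insert
assertion fires on unpause.]

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-in for an RTDS unit tracked on two queues. */
struct unit {
    bool on_runq;
    bool on_replq;
};

static void q_remove(struct unit *u)     { u->on_runq = false; }
static void replq_remove(struct unit *u) { u->on_replq = false; }

static void replq_insert(struct unit *u)
{
    assert(!u->on_replq);   /* mirrors the ASSERT() the race used to trip */
    u->on_replq = true;
}

/* Deschedule path after the fix: drop BOTH queue memberships. */
static void deschedule_not_runnable(struct unit *u)
{
    q_remove(u);
    replq_remove(u);        /* the one-line addition of the patch */
}
```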
---
 xen/common/sched/rt.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/common/sched/rt.c b/xen/common/sched/rt.c
index d6de25531b..960a8033e2 100644
--- a/xen/common/sched/rt.c
+++ b/xen/common/sched/rt.c
@@ -1087,6 +1087,7 @@ rt_schedule(const struct scheduler *ops, struct sched_unit *currunit,
         else if ( !unit_runnable_state(snext->unit) )
         {
             q_remove(snext);
+            replq_remove(ops, snext);
             snext = rt_unit(sched_idle_unit(sched_cpu));
         }
 
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 12:22:06 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 12:22:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427650.676927 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olr2J-0008MX-4w; Fri, 21 Oct 2022 12:22:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427650.676927; Fri, 21 Oct 2022 12:22:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olr2J-0008MP-1u; Fri, 21 Oct 2022 12:22:03 +0000
Received: by outflank-mailman (input) for mailman id 427650;
 Fri, 21 Oct 2022 12:22:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olr2I-0008M2-3p
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 12:22:02 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olr2I-0001qm-2u
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 12:22:02 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olr2I-0000nM-1r
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 12:22:02 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=S99P+XqkcWyPW4SIK+Ld5Uq0oq17cKs9irasE7iWnd4=; b=gL9Bhm3e94kjmaUMit9a8rxQdL
	PYO/AyhxzD8fItFEYGR9S5Qzfhmu8paUwUhnRut9zg6dhCSkWpgkn6QfOd8RoUeSKERWCGvpnin0G
	qqtRUziuEewiXi4C+9XfzT1eJOtgulMiIFjjCOkn5ho3MKSmkXmdy8IPx38eduuvi9Fk=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [qemu-xen stable-4.16] ebpf: replace deprecated bpf_program__set_socket_filter
Message-Id: <E1olr2I-0000nM-1r@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 12:22:02 +0000

commit 62dd49f2172fb7dfe8d4223bfa45aede05155328
Author:     Haochen Tong <i@hexchain.org>
AuthorDate: Sat May 28 03:06:58 2022 +0800
Commit:     Anthony PERARD <anthony.perard@gmail.com>
CommitDate: Thu Oct 20 14:39:06 2022 +0100

    ebpf: replace deprecated bpf_program__set_socket_filter
    
    bpf_program__set_<TYPE> functions have been deprecated since libbpf 0.8.
    Replace with the equivalent bpf_program__set_type call to avoid a
    deprecation warning.
    
    Signed-off-by: Haochen Tong <i@hexchain.org>
    Reviewed-by: Zhang Chen <chen.zhang@intel.com>
    Signed-off-by: Jason Wang <jasowang@redhat.com>
    (cherry picked from commit a495eba03c31c96d6a0817b13598ce2219326691)
---
 ebpf/ebpf_rss.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ebpf/ebpf_rss.c b/ebpf/ebpf_rss.c
index 118c68da83..cee658c158 100644
--- a/ebpf/ebpf_rss.c
+++ b/ebpf/ebpf_rss.c
@@ -49,7 +49,7 @@ bool ebpf_rss_load(struct EBPFRSSContext *ctx)
         goto error;
     }
 
-    bpf_program__set_socket_filter(rss_bpf_ctx->progs.tun_rss_steering_prog);
+    bpf_program__set_type(rss_bpf_ctx->progs.tun_rss_steering_prog, BPF_PROG_TYPE_SOCKET_FILTER);
 
     if (rss_bpf__load(rss_bpf_ctx)) {
         trace_ebpf_error("eBPF RSS", "can not load RSS program");
--
generated by git-patchbot for /home/xen/git/qemu-xen.git#stable-4.16


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:33:10 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 16:33:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427843.677289 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oluxD-0000XA-Kc; Fri, 21 Oct 2022 16:33:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427843.677289; Fri, 21 Oct 2022 16:33:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oluxD-0000X3-I3; Fri, 21 Oct 2022 16:33:03 +0000
Received: by outflank-mailman (input) for mailman id 427843;
 Fri, 21 Oct 2022 16:33:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oluxC-0000Wv-HQ
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:33:02 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oluxC-00072O-Gf
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:33:02 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oluxC-0004UA-Ff
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:33:02 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=mps2bpdtNuHtjlqwJqb5qZl4nrenX7vySOb53T224z8=; b=QfKrqhWtjWQfAj4bn7wi1M/q1r
	WIFGoa16AxikbNtRUYyCbiLA+UAwAIJDUCrBxrQHCOlDKrjy63v0a2VQpzNDL2s4PaMZTS8dssPaR
	+gtWg4vNtR4wvePbeKwKDmtFfsda20UYJY6Pz8+K+nvhkAi5yvv1U9QU8wT4WKjkz4zI=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] xen/arm: p2m: Prevent adding mapping when domain is dying
Message-Id: <E1oluxC-0004UA-Ff@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:33:02 +0000

commit 3ebe773293e3b945460a3d6f54f3b91915397bab
Author:     Julien Grall <jgrall@amazon.com>
AuthorDate: Mon Jun 6 06:17:25 2022 +0000
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:20:18 2022 +0200

    xen/arm: p2m: Prevent adding mapping when domain is dying
    
    During the domain destroy process, the domain will still be accessible
    until it is fully destroyed. So is the P2M, because we don't bail
    out early if is_dying is non-zero. If a domain has permission to
    modify another domain's P2M (i.e. dom0, or a stubdomain), then
    foreign mappings can be added past relinquish_p2m_mapping().
    
    Therefore, we need to prevent mappings from being added when the
    domain is dying. This commit does so by adding a d->is_dying check
    to p2m_set_entry(). It also strengthens the check in
    relinquish_p2m_mapping() to make sure that no mappings can be added
    to the P2M after the P2M lock is released.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Tested-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
 xen/arch/arm/p2m.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 8449f97fe7..c2e0b116c4 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1092,6 +1092,15 @@ int p2m_set_entry(struct p2m_domain *p2m,
 {
     int rc = 0;
 
+    /*
+     * Any reference taken by the P2M mappings (e.g. foreign mapping) will
+     * be dropped in relinquish_p2m_mapping(). As the P2M will still
+     * be accessible after, we need to prevent mapping to be added when the
+     * domain is dying.
+     */
+    if ( unlikely(p2m->domain->is_dying) )
+        return -ENOMEM;
+
     while ( nr )
     {
         unsigned long mask;
@@ -1634,6 +1643,8 @@ int relinquish_p2m_mapping(struct domain *d)
     unsigned int order;
     gfn_t start, end;
 
+    BUG_ON(!d->is_dying);
+    /* No mappings can be added in the P2M after the P2M lock is released. */
     p2m_write_lock(p2m);
 
     start = p2m->lowest_mapped_gfn;
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:33:13 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 16:33:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427844.677293 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oluxN-0000ZJ-Mo; Fri, 21 Oct 2022 16:33:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427844.677293; Fri, 21 Oct 2022 16:33:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oluxN-0000ZA-Ji; Fri, 21 Oct 2022 16:33:13 +0000
Received: by outflank-mailman (input) for mailman id 427844;
 Fri, 21 Oct 2022 16:33:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oluxM-0000Yz-Kd
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:33:12 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oluxM-00072S-Jv
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:33:12 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oluxM-0004Ud-Iv
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:33:12 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=njgBdhx9xLJcJAzjgFgSUruKVwqBbxrZBT+ZxRx6sCI=; b=B8e4X0SVodiFs+5Bl74K7uMkbg
	f14udJbbCQhlWTH+XAzUuz/c72pL0u2GhpEFkw5UgtPioEAx4lp6TR8mEToWE3NTuBKWFEA4uJRNe
	Wd/7rtG8EiDTxkyjMojZn8Syj5dtVNJC3QfSiGlMylI0167BGk5ZJ0fKyzir8mwAUOb4=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] xen/arm: p2m: Handle preemption when freeing intermediate page tables
Message-Id: <E1oluxM-0004Ud-Iv@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:33:12 +0000

commit 3202084566bba0ef0c45caf8c24302f83d92f9c8
Author:     Julien Grall <jgrall@amazon.com>
AuthorDate: Mon Jun 6 06:17:26 2022 +0000
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:20:56 2022 +0200

    xen/arm: p2m: Handle preemption when freeing intermediate page tables
    
    At the moment the P2M page tables are freed, without any preemption,
    when the domain structure is freed. As the P2M is quite large,
    iterating through it may take more time than is reasonable without
    intermediate preemption (to run softirqs and perhaps the scheduler).
    
    Split p2m_teardown() into two parts: one preemptible and called when
    relinquishing the resources, the other non-preemptible and called
    when freeing the domain structure.
    
    As we are now freeing the P2M pages early, we also need to prevent
    further allocation if someone calls p2m_set_entry() past
    p2m_teardown() (I wasn't able to prove this will never happen). This
    is done by the domain->is_dying check added to p2m_set_entry() by the
    previous patch.
    
    Similarly, we want to make sure that no one can access the freed
    pages. Therefore the root is cleared before the pages are freed.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Tested-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
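
[Editor's note: the preemptible drain pattern used here can be sketched in
isolation. The list type, the preempt flag, and teardown_sketch() are
illustrative; Xen uses its page_list helpers, free_domheap_page(), and
hypercall_preempt_check(). The idea is to free in bounded batches and return
a restart indicator so the caller can resume the continuation later.]

```c
#include <stdlib.h>

/* Illustrative singly linked "page" list. */
struct pg { struct pg *next; };

static int preempt_pending;      /* stand-in for hypercall_preempt_check() */

/* Drain the list, checking for preemption every 512 frees; returns
 * nonzero if interrupted, with *head pointing at the unprocessed tail
 * so the caller can restart from where it left off. */
static int teardown_sketch(struct pg **head)
{
    unsigned long count = 0;
    struct pg *pg;

    while ( (pg = *head) != NULL )
    {
        *head = pg->next;
        free(pg);                /* free_domheap_page() in Xen */
        /* Arbitrarily preempt every 512 iterations, as the patch does. */
        if ( !(++count % 512) && preempt_pending )
            return 1;            /* -ERESTART in Xen */
    }
    return 0;
}
```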
---
 xen/arch/arm/domain.c          | 10 +++++++--
 xen/arch/arm/include/asm/p2m.h | 13 ++++++++++--
 xen/arch/arm/p2m.c             | 47 +++++++++++++++++++++++++++++++++++++++---
 3 files changed, 63 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 2d6253181a..746ad3438a 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -795,10 +795,10 @@ fail:
 void arch_domain_destroy(struct domain *d)
 {
     /* IOMMU page table is shared with P2M, always call
-     * iommu_domain_destroy() before p2m_teardown().
+     * iommu_domain_destroy() before p2m_final_teardown().
      */
     iommu_domain_destroy(d);
-    p2m_teardown(d);
+    p2m_final_teardown(d);
     domain_vgic_free(d);
     domain_vuart_free(d);
     free_xenheap_page(d->shared_info);
@@ -1001,6 +1001,7 @@ enum {
     PROG_xen,
     PROG_page,
     PROG_mapping,
+    PROG_p2m,
     PROG_done,
 };
 
@@ -1061,6 +1062,11 @@ int domain_relinquish_resources(struct domain *d)
         if ( ret )
             return ret;
 
+    PROGRESS(p2m):
+        ret = p2m_teardown(d);
+        if ( ret )
+            return ret;
+
     PROGRESS(done):
         break;
 
diff --git a/xen/arch/arm/include/asm/p2m.h b/xen/arch/arm/include/asm/p2m.h
index 8cce459b67..a15ea67f9b 100644
--- a/xen/arch/arm/include/asm/p2m.h
+++ b/xen/arch/arm/include/asm/p2m.h
@@ -192,8 +192,17 @@ void setup_virt_paging(void);
 /* Init the datastructures for later use by the p2m code */
 int p2m_init(struct domain *d);
 
-/* Return all the p2m resources to Xen. */
-void p2m_teardown(struct domain *d);
+/*
+ * The P2M resources are freed in two parts:
+ *  - p2m_teardown() will be called when relinquish the resources. It
+ *    will free large resources (e.g. intermediate page-tables) that
+ *    requires preemption.
+ *  - p2m_final_teardown() will be called when domain struct is been
+ *    freed. This *cannot* be preempted and therefore one small
+ *    resources should be freed here.
+ */
+int p2m_teardown(struct domain *d);
+void p2m_final_teardown(struct domain *d);
 
 /*
  * Remove mapping refcount on each mapping page in the p2m
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index c2e0b116c4..b445f4d754 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1551,17 +1551,58 @@ static void p2m_free_vmid(struct domain *d)
     spin_unlock(&vmid_alloc_lock);
 }
 
-void p2m_teardown(struct domain *d)
+int p2m_teardown(struct domain *d)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    unsigned long count = 0;
     struct page_info *pg;
+    unsigned int i;
+    int rc = 0;
+
+    p2m_write_lock(p2m);
+
+    /*
+     * We are about to free the intermediate page-tables, so clear the
+     * root to prevent any walk to use them.
+     */
+    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
+        clear_and_clean_page(p2m->root + i);
+
+    /*
+     * The domain will not be scheduled anymore, so in theory we should
+     * not need to flush the TLBs. Do it for safety purpose.
+     *
+     * Note that all the devices have already been de-assigned. So we don't
+     * need to flush the IOMMU TLB here.
+     */
+    p2m_force_tlb_flush_sync(p2m);
+
+    while ( (pg = page_list_remove_head(&p2m->pages)) )
+    {
+        free_domheap_page(pg);
+        count++;
+        /* Arbitrarily preempt every 512 iterations */
+        if ( !(count % 512) && hypercall_preempt_check() )
+        {
+            rc = -ERESTART;
+            break;
+        }
+    }
+
+    p2m_write_unlock(p2m);
+
+    return rc;
+}
+
+void p2m_final_teardown(struct domain *d)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
     /* p2m not actually initialized */
     if ( !p2m->domain )
         return;
 
-    while ( (pg = page_list_remove_head(&p2m->pages)) )
-        free_domheap_page(pg);
+    ASSERT(page_list_empty(&p2m->pages));
 
     if ( p2m->root )
         free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:33:23 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 16:33:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427845.677298 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oluxX-0000cT-OR; Fri, 21 Oct 2022 16:33:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427845.677298; Fri, 21 Oct 2022 16:33:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oluxX-0000cL-LI; Fri, 21 Oct 2022 16:33:23 +0000
Received: by outflank-mailman (input) for mailman id 427845;
 Fri, 21 Oct 2022 16:33:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oluxW-0000cC-Nx
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:33:22 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oluxW-00072j-NB
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:33:22 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oluxW-0004Va-MJ
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:33:22 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=ObJpZFgPqvgdGeWtbpKemGJQsJbbxtofc9huSSRXnlk=; b=LSw4t9ktSRGidgjxtkY3Ger99V
	aLOyXFER9m1YS/V0ycSig8ubWD4pUbnEbfmHMKotDnV2NddP7INdHpXMKS4gwtH6VCGGmdcQvMqaQ
	wG6Sc9sWR/zyuQoQes0f3PAChPTQbl5y6r0KVjzf8g07S8m33Qd2qyL71hC8PO8/dohQ=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] x86/p2m: add option to skip root pagetable removal in p2m_teardown()
Message-Id: <E1oluxW-0004Va-MJ@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:33:22 +0000

commit 1df52a270225527ae27bfa2fc40347bf93b78357
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 14:21:23 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:21:23 2022 +0200

    x86/p2m: add option to skip root pagetable removal in p2m_teardown()
    
    Add a new parameter to p2m_teardown() in order to select whether the
    root page table should also be freed.  Note that all users are
    adjusted to pass the parameter to remove the root page tables, so
    behavior is not modified.
    
    No functional change intended.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Suggested-by: Julien Grall <julien@xen.org>
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
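
[Editor's note: the keep-the-root variant boils down to filtering one page out
of the list drain, clearing it, and re-linking it afterwards so it stays
tracked. A standalone sketch follows; the list type and names are
illustrative, not Xen's actual structures.]

```c
#include <stdlib.h>
#include <string.h>

struct pg {
    struct pg *next;
    char data[64];               /* stand-in for the page contents */
};

/* Drain the list, freeing everything except an optional root page,
 * which is cleared (cf. clear_domain_page()) and re-added so the
 * list keeps tracking it. */
static void teardown_keep_root(struct pg **head, struct pg *root_pg)
{
    struct pg *pg;

    if ( root_pg )
        memset(root_pg->data, 0, sizeof(root_pg->data));

    while ( (pg = *head) != NULL )
    {
        *head = pg->next;
        if ( pg != root_pg )
            free(pg);            /* d->arch.paging.free_page() in Xen */
    }

    if ( root_pg )
    {
        root_pg->next = NULL;    /* list is empty at this point */
        *head = root_pg;
    }
}
```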
---
 xen/arch/x86/include/asm/p2m.h  |  2 +-
 xen/arch/x86/mm/hap/hap.c       |  6 +++---
 xen/arch/x86/mm/p2m-basic.c     | 18 ++++++++++++++----
 xen/arch/x86/mm/shadow/common.c |  4 ++--
 4 files changed, 20 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/include/asm/p2m.h b/xen/arch/x86/include/asm/p2m.h
index 0a0f7114f3..bafbd96052 100644
--- a/xen/arch/x86/include/asm/p2m.h
+++ b/xen/arch/x86/include/asm/p2m.h
@@ -600,7 +600,7 @@ int p2m_init(struct domain *d);
 int p2m_alloc_table(struct p2m_domain *p2m);
 
 /* Return all the p2m resources to Xen. */
-void p2m_teardown(struct p2m_domain *p2m);
+void p2m_teardown(struct p2m_domain *p2m, bool remove_root);
 void p2m_final_teardown(struct domain *d);
 
 /* Add/remove a page to/from a domain's p2m table. */
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 79929774e8..9e0b725c59 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -541,18 +541,18 @@ void hap_final_teardown(struct domain *d)
         }
 
         for ( i = 0; i < MAX_ALTP2M; i++ )
-            p2m_teardown(d->arch.altp2m_p2m[i]);
+            p2m_teardown(d->arch.altp2m_p2m[i], true);
     }
 
     /* Destroy nestedp2m's first */
     for (i = 0; i < MAX_NESTEDP2M; i++) {
-        p2m_teardown(d->arch.nested_p2m[i]);
+        p2m_teardown(d->arch.nested_p2m[i], true);
     }
 
     if ( d->arch.paging.hap.total_pages != 0 )
         hap_teardown(d, NULL);
 
-    p2m_teardown(p2m_get_hostp2m(d));
+    p2m_teardown(p2m_get_hostp2m(d), true);
     /* Free any memory that the p2m teardown released */
     paging_lock(d);
     hap_set_allocation(d, 0, NULL);
diff --git a/xen/arch/x86/mm/p2m-basic.c b/xen/arch/x86/mm/p2m-basic.c
index 9130fc2a70..3231aaa9ba 100644
--- a/xen/arch/x86/mm/p2m-basic.c
+++ b/xen/arch/x86/mm/p2m-basic.c
@@ -154,10 +154,10 @@ int p2m_init(struct domain *d)
  * hvm fixme: when adding support for pvh non-hardware domains, this path must
  * cleanup any foreign p2m types (release refcnts on them).
  */
-void p2m_teardown(struct p2m_domain *p2m)
+void p2m_teardown(struct p2m_domain *p2m, bool remove_root)
 {
 #ifdef CONFIG_HVM
-    struct page_info *pg;
+    struct page_info *pg, *root_pg = NULL;
     struct domain *d;
 
     if ( !p2m )
@@ -171,10 +171,20 @@ void p2m_teardown(struct p2m_domain *p2m)
     ASSERT(atomic_read(&d->shr_pages) == 0);
 #endif
 
-    p2m->phys_table = pagetable_null();
+    if ( remove_root )
+        p2m->phys_table = pagetable_null();
+    else if ( !pagetable_is_null(p2m->phys_table) )
+    {
+        root_pg = pagetable_get_page(p2m->phys_table);
+        clear_domain_page(pagetable_get_mfn(p2m->phys_table));
+    }
 
     while ( (pg = page_list_remove_head(&p2m->pages)) )
-        d->arch.paging.free_page(d, pg);
+        if ( pg != root_pg )
+            d->arch.paging.free_page(d, pg);
+
+    if ( root_pg )
+        page_list_add(root_pg, &p2m->pages);
 
     p2m_unlock(p2m);
 #endif
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 0247f0c84e..3e1e43a389 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2707,7 +2707,7 @@ int shadow_enable(struct domain *d, u32 mode)
  out_unlocked:
 #ifdef CONFIG_HVM
     if ( rv != 0 && !pagetable_is_null(p2m_get_pagetable(p2m)) )
-        p2m_teardown(p2m);
+        p2m_teardown(p2m, true);
 #endif
     if ( rv != 0 && pg != NULL )
     {
@@ -2873,7 +2873,7 @@ void shadow_final_teardown(struct domain *d)
         shadow_teardown(d, NULL);
 
     /* It is now safe to pull down the p2m map. */
-    p2m_teardown(p2m_get_hostp2m(d));
+    p2m_teardown(p2m_get_hostp2m(d), true);
     /* Free any shadow memory that the p2m teardown released */
     paging_lock(d);
     shadow_set_allocation(d, 0, NULL);
--
generated by git-patchbot for /home/xen/git/xen.git#master
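[Editorial note, not part of the patch: a minimal, self-contained C sketch of the list handling the patch introduces. All types and names here are toy stand-ins, not the real Xen implementation: every page on the pool list is freed except the root, which is cleared and re-linked so a later full teardown can still find it, mirroring p2m_teardown(p2m, false).]

```c
#include <stddef.h>
#include <string.h>

/* Toy stand-ins for Xen's page-list handling; illustrative only. */
struct page {
    struct page *next;
    int freed;
    char data[8];
};

/* Free every page on @head except @root_pg (NULL when the root goes too);
 * the surviving root is cleared and becomes the sole list entry. */
static struct page *teardown(struct page *head, struct page *root_pg)
{
    struct page *pg;

    if ( root_pg )
        memset(root_pg->data, 0, sizeof(root_pg->data)); /* clear_domain_page() */

    while ( (pg = head) != NULL )
    {
        head = pg->next;
        if ( pg != root_pg )
            pg->freed = 1;                  /* d->arch.paging.free_page() */
    }

    if ( root_pg )
        root_pg->next = NULL;               /* page_list_add(root_pg, ...) */

    return root_pg;
}

/* Exercise keep-root mode; returns 1 if behavior matches expectations. */
static int teardown_demo(void)
{
    struct page a = { 0 }, b = { 0 }, c = { 0 };

    a.next = &b;
    b.next = &c;
    b.data[0] = 'x';

    if ( teardown(&a, &b) != &b )           /* keep root: only b survives */
        return 0;
    return a.freed && c.freed && !b.freed && !b.data[0];
}
```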


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:33:33 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 16:33:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427846.677301 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oluxh-0000ez-PV; Fri, 21 Oct 2022 16:33:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427846.677301; Fri, 21 Oct 2022 16:33:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oluxh-0000er-Mt; Fri, 21 Oct 2022 16:33:33 +0000
Received: by outflank-mailman (input) for mailman id 427846;
 Fri, 21 Oct 2022 16:33:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oluxg-0000ek-R5
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:33:32 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oluxg-00072u-QI
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:33:32 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oluxg-0004W9-PN
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:33:32 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=rCQdQXQ6BQ9gE+hbVLpwS3HjnvRPOSIvelUhamZ/SPQ=; b=BMKT3gCp9TMx9ytY/P7YsdsbQK
	jSfgRrfKN33HtaJ1C7zrGHjD3nic12LZPL0eIqR+39w16kygO8tizQcwlu6wumx0RcDG8QAbTeBmJ
	Ep//EQ5RztKHDKVb5GaV+KFO826cNgBWw0gTI9pLFUgXXuCYalewfcoUc+QXuhKjsMhY=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] x86/HAP: adjust monitor table related error handling
Message-Id: <E1oluxg-0004W9-PN@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:33:32 +0000

commit 5b44a61180f4f2e4f490a28400c884dd357ff45d
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 14:21:56 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:21:56 2022 +0200

    x86/HAP: adjust monitor table related error handling
    
    hap_make_monitor_table() will return INVALID_MFN if it encounters an
    error condition, but hap_update_paging_modes() wasn’t handling this
    value, resulting in an inappropriate value being stored in
    monitor_table. This would subsequently misguide at least
    hap_vcpu_teardown(). Avoid this by bailing early.
    
    Further, when a domain has already been crashed or (perhaps less
    importantly, as no such path is known to lead here) is already dying,
    avoid calling domain_crash() on it again - that's at best confusing.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/mm/hap/hap.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 9e0b725c59..691d5d2dd1 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -39,6 +39,7 @@
 #include <asm/domain.h>
 #include <xen/numa.h>
 #include <asm/hvm/nestedhvm.h>
+#include <public/sched.h>
 
 #include "private.h"
 
@@ -405,8 +406,13 @@ static mfn_t hap_make_monitor_table(struct vcpu *v)
     return m4mfn;
 
  oom:
-    printk(XENLOG_G_ERR "out of memory building monitor pagetable\n");
-    domain_crash(d);
+    if ( !d->is_dying &&
+         (!d->is_shutting_down || d->shutdown_code != SHUTDOWN_crash) )
+    {
+        printk(XENLOG_G_ERR "%pd: out of memory building monitor pagetable\n",
+               d);
+        domain_crash(d);
+    }
     return INVALID_MFN;
 }
 
@@ -763,6 +769,9 @@ static void cf_check hap_update_paging_modes(struct vcpu *v)
     if ( pagetable_is_null(v->arch.hvm.monitor_table) )
     {
         mfn_t mmfn = hap_make_monitor_table(v);
+
+        if ( mfn_eq(mmfn, INVALID_MFN) )
+            goto unlock;
         v->arch.hvm.monitor_table = pagetable_from_mfn(mmfn);
         make_cr3(v, mmfn);
         hvm_update_host_cr3(v);
@@ -771,6 +780,7 @@ static void cf_check hap_update_paging_modes(struct vcpu *v)
     /* CR3 is effectively updated by a mode change. Flush ASIDs, etc. */
     hap_update_cr3(v, 0, false);
 
+ unlock:
     paging_unlock(d);
     put_gfn(d, cr3_gfn);
 }
--
generated by git-patchbot for /home/xen/git/xen.git#master
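[Editorial note, not part of the patch: a hedged sketch of the "bail early on INVALID_MFN" pattern this patch applies. The types and names below are hypothetical; only the SHUTDOWN_crash value mimics Xen's public/sched.h. The point is that the caller checks the sentinel before storing it, and the failing allocator avoids re-crashing a domain that is already dying or crashed.]

```c
#include <stdbool.h>
#include <stdint.h>

#define INVALID_MFN UINT64_MAX   /* sentinel, as in Xen's mfn_t */
#define SHUTDOWN_crash 3         /* value taken from public/sched.h */

/* Hypothetical domain state mirroring the fields the patch tests. */
struct dom {
    bool is_dying, is_shutting_down;
    int shutdown_code;
    int crashes;                 /* counts domain_crash() invocations */
};

static uint64_t make_monitor_table(struct dom *d, bool oom)
{
    if ( !oom )
        return 42;               /* some valid MFN */
    /* Don't crash a domain that is dying or already crashed. */
    if ( !d->is_dying &&
         (!d->is_shutting_down || d->shutdown_code != SHUTDOWN_crash) )
        d->crashes++;            /* domain_crash(d) */
    return INVALID_MFN;
}

/* Caller checks the sentinel instead of storing it in monitor_table. */
static bool update_paging_modes(struct dom *d, bool oom, uint64_t *table)
{
    uint64_t mfn = make_monitor_table(d, oom);

    if ( mfn == INVALID_MFN )
        return false;            /* "goto unlock" in the real code */
    *table = mfn;
    return true;
}

static int monitor_demo(void)
{
    struct dom d = { 0 };
    uint64_t table = 0;

    if ( !update_paging_modes(&d, false, &table) || table != 42 )
        return 0;
    if ( update_paging_modes(&d, true, &table) || d.crashes != 1 )
        return 0;                /* first OOM crashes the domain once */
    d.is_shutting_down = true;
    d.shutdown_code = SHUTDOWN_crash;
    update_paging_modes(&d, true, &table);
    return d.crashes == 1 && table == 42;  /* no re-crash, table intact */
}
```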


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:33:43 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 16:33:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427847.677305 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oluxr-0000hk-RS; Fri, 21 Oct 2022 16:33:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427847.677305; Fri, 21 Oct 2022 16:33:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oluxr-0000hb-ON; Fri, 21 Oct 2022 16:33:43 +0000
Received: by outflank-mailman (input) for mailman id 427847;
 Fri, 21 Oct 2022 16:33:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oluxq-0000hP-V2
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:33:42 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oluxq-00073C-UF
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:33:42 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oluxq-0004Wa-Sz
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:33:42 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=CZ519qm986YSxxWJjZ7UhIi60AtsUismDzs6IwAx8/A=; b=Oa1W5ASzi8H5KCxqI9SxVficZF
	HjIS0Ahc2xW+01Ut+88XSwmrS+6Kua3b1v+qkwpxu+PWs3BRh59YDgk+kDNvy7Fhcg3BIBlE6MWOu
	jE4dlZdFdYImjT3fbsNM1ODPSuk/eGzTSRmw04kE7+sDSdv3ucbg4UpIdOKQf5CwOMyc=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] x86/shadow: tolerate failure of sh_set_toplevel_shadow()
Message-Id: <E1oluxq-0004Wa-Sz@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:33:42 +0000

commit eac000978c1feb5a9ee3236ab0c0da9a477e5336
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 14:22:24 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:22:24 2022 +0200

    x86/shadow: tolerate failure of sh_set_toplevel_shadow()
    
    Subsequently sh_set_toplevel_shadow() will be adjusted to install a
    blank entry in case prealloc fails. There are, in fact, pre-existing
    error paths which would put in place a blank entry. The 4- and 2-level
    code in sh_update_cr3(), however, assumes the top level entry to be
    valid.
    
    Hence bail from the function in the unlikely event that it's not. Note
    that 3-level logic works differently: In particular a guest is free to
    supply a PDPTR pointing at 4 non-present (or otherwise deemed invalid)
    entries. The guest will crash, but we already cope with that.
    
    Really mfn_valid() is likely the wrong check to use in
    sh_set_toplevel_shadow(); it should instead be !mfn_eq(gmfn, INVALID_MFN).
    Avoid such a change in a security patch, but add a corresponding
    assertion.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
 xen/arch/x86/mm/shadow/common.c |  1 +
 xen/arch/x86/mm/shadow/multi.c  | 10 ++++++++++
 2 files changed, 11 insertions(+)

diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 3e1e43a389..a1961291a2 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2521,6 +2521,7 @@ void sh_set_toplevel_shadow(struct vcpu *v,
     /* Now figure out the new contents: is this a valid guest MFN? */
     if ( !mfn_valid(gmfn) )
     {
+        ASSERT(mfn_eq(gmfn, INVALID_MFN));
         new_entry = pagetable_null();
         goto install_new_entry;
     }
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index e10de449f1..a51ec5d4f5 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -3316,6 +3316,11 @@ static void cf_check sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
     if ( sh_remove_write_access(d, gmfn, 4, 0) != 0 )
         guest_flush_tlb_mask(d, d->dirty_cpumask);
     sh_set_toplevel_shadow(v, 0, gmfn, SH_type_l4_shadow, sh_make_shadow);
+    if ( unlikely(pagetable_is_null(v->arch.paging.shadow.shadow_table[0])) )
+    {
+        ASSERT(d->is_dying || d->is_shutting_down);
+        return;
+    }
     if ( !shadow_mode_external(d) && !is_pv_32bit_domain(d) )
     {
         mfn_t smfn = pagetable_get_mfn(v->arch.paging.shadow.shadow_table[0]);
@@ -3372,6 +3377,11 @@ static void cf_check sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
     if ( sh_remove_write_access(d, gmfn, 2, 0) != 0 )
         guest_flush_tlb_mask(d, d->dirty_cpumask);
     sh_set_toplevel_shadow(v, 0, gmfn, SH_type_l2_shadow, sh_make_shadow);
+    if ( unlikely(pagetable_is_null(v->arch.paging.shadow.shadow_table[0])) )
+    {
+        ASSERT(d->is_dying || d->is_shutting_down);
+        return;
+    }
 #else
 #error This should never happen
 #endif
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:33:54 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 16:33:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427848.677311 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oluy1-0000kv-Vn; Fri, 21 Oct 2022 16:33:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427848.677311; Fri, 21 Oct 2022 16:33:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oluy1-0000kn-SL; Fri, 21 Oct 2022 16:33:53 +0000
Received: by outflank-mailman (input) for mailman id 427848;
 Fri, 21 Oct 2022 16:33:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oluy1-0000kc-2F
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:33:53 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oluy1-00073d-1U
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:33:53 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oluy1-0004YY-0c
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:33:53 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=zd/y4K9RwQOmRqCTc4cFwer1O8wFWo59g01nQSGjqFs=; b=f72FgZm0zTJG1tc3BLct9cJ36r
	1Ek8qbbBVDSHU5E6mAqHzcNiPgVMFaF1BbER1ntSTAf8NVBOGvFy/aS4hbHxalDQedJNS33vLcHow
	3krQW0BRo9wTUkVLYBTCHZJ3cBQIzr/RtlyOmw0xCpU63PA3nOhs+zylrSQqrD+9f19w=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] x86/shadow: tolerate failure in shadow_prealloc()
Message-Id: <E1oluy1-0004YY-0c@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:33:53 +0000

commit b7f93c6afb12b6061e2d19de2f39ea09b569ac68
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 14:22:53 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:22:53 2022 +0200

    x86/shadow: tolerate failure in shadow_prealloc()
    
    Prevent _shadow_prealloc() from calling BUG() when unable to fulfill
    the pre-allocation and instead return true/false.  Modify
    shadow_prealloc() to crash the domain on allocation failure (if the
    domain is not already dying), as shadow cannot operate normally after
    that.  Modify callers to also gracefully handle {_,}shadow_prealloc()
    failing to fulfill the request.
    
    Note this in turn requires also adjusting the callers of
    sh_make_monitor_table() to handle it returning INVALID_MFN.
    sh_update_paging_modes() is also modified to add additional error
    paths in case of allocation failure; some of those return with
    null monitor page tables (and the domain likely crashed). This is no
    different from the current error paths, but the newly introduced ones
    are more likely to trigger.
    
    The now added failure points in sh_update_paging_modes() also require
    that on some error return paths the previous structures are cleared,
    and thus the monitor table is null.
    
    While there, adjust the type of shadow_prealloc()'s 'type' parameter
    from u32 to unsigned int.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/mm/shadow/common.c  | 69 ++++++++++++++++++++++++++++++----------
 xen/arch/x86/mm/shadow/hvm.c     |  4 ++-
 xen/arch/x86/mm/shadow/multi.c   | 11 +++++--
 xen/arch/x86/mm/shadow/private.h |  3 +-
 4 files changed, 66 insertions(+), 21 deletions(-)

diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index a1961291a2..5b24be5325 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -36,6 +36,7 @@
 #include <asm/flushtlb.h>
 #include <asm/shadow.h>
 #include <xen/numa.h>
+#include <public/sched.h>
 #include "private.h"
 
 DEFINE_PER_CPU(uint32_t,trace_shadow_path_flags);
@@ -928,14 +929,15 @@ static inline void trace_shadow_prealloc_unpin(struct domain *d, mfn_t smfn)
 
 /* Make sure there are at least count order-sized pages
  * available in the shadow page pool. */
-static void _shadow_prealloc(struct domain *d, unsigned int pages)
+static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
 {
     struct vcpu *v;
     struct page_info *sp, *t;
     mfn_t smfn;
     int i;
 
-    if ( d->arch.paging.shadow.free_pages >= pages ) return;
+    if ( d->arch.paging.shadow.free_pages >= pages )
+        return true;
 
     /* Shouldn't have enabled shadows if we've no vcpus. */
     ASSERT(d->vcpu && d->vcpu[0]);
@@ -951,7 +953,8 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
         sh_unpin(d, smfn);
 
         /* See if that freed up enough space */
-        if ( d->arch.paging.shadow.free_pages >= pages ) return;
+        if ( d->arch.paging.shadow.free_pages >= pages )
+            return true;
     }
 
     /* Stage two: all shadow pages are in use in hierarchies that are
@@ -974,7 +977,7 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
                 if ( d->arch.paging.shadow.free_pages >= pages )
                 {
                     guest_flush_tlb_mask(d, d->dirty_cpumask);
-                    return;
+                    return true;
                 }
             }
         }
@@ -987,7 +990,12 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
            d->arch.paging.shadow.total_pages,
            d->arch.paging.shadow.free_pages,
            d->arch.paging.shadow.p2m_pages);
-    BUG();
+
+    ASSERT(d->is_dying);
+
+    guest_flush_tlb_mask(d, d->dirty_cpumask);
+
+    return false;
 }
 
 /* Make sure there are at least count pages of the order according to
@@ -995,9 +1003,19 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
  * This must be called before any calls to shadow_alloc().  Since this
  * will free existing shadows to make room, it must be called early enough
  * to avoid freeing shadows that the caller is currently working on. */
-void shadow_prealloc(struct domain *d, u32 type, unsigned int count)
+bool shadow_prealloc(struct domain *d, unsigned int type, unsigned int count)
 {
-    return _shadow_prealloc(d, shadow_size(type) * count);
+    bool ret = _shadow_prealloc(d, shadow_size(type) * count);
+
+    if ( !ret && !d->is_dying &&
+         (!d->is_shutting_down || d->shutdown_code != SHUTDOWN_crash) )
+        /*
+         * Failing to allocate memory required for shadow usage can only result in
+         * a domain crash, do it here rather than relying on every caller to do it.
+         */
+        domain_crash(d);
+
+    return ret;
 }
 
 /* Deliberately free all the memory we can: this will tear down all of
@@ -1218,7 +1236,7 @@ void shadow_free(struct domain *d, mfn_t smfn)
 static struct page_info *cf_check
 shadow_alloc_p2m_page(struct domain *d)
 {
-    struct page_info *pg;
+    struct page_info *pg = NULL;
 
     /* This is called both from the p2m code (which never holds the
      * paging lock) and the log-dirty code (which always does). */
@@ -1236,16 +1254,18 @@ shadow_alloc_p2m_page(struct domain *d)
                     d->arch.paging.shadow.p2m_pages,
                     shadow_min_acceptable_pages(d));
         }
-        paging_unlock(d);
-        return NULL;
+        goto out;
     }
 
-    shadow_prealloc(d, SH_type_p2m_table, 1);
+    if ( !shadow_prealloc(d, SH_type_p2m_table, 1) )
+        goto out;
+
     pg = mfn_to_page(shadow_alloc(d, SH_type_p2m_table, 0));
     d->arch.paging.shadow.p2m_pages++;
     d->arch.paging.shadow.total_pages--;
     ASSERT(!page_get_owner(pg) && !(pg->count_info & PGC_count_mask));
 
+ out:
     paging_unlock(d);
 
     return pg;
@@ -1336,7 +1356,9 @@ int shadow_set_allocation(struct domain *d, unsigned int pages, bool *preempted)
         else if ( d->arch.paging.shadow.total_pages > pages )
         {
             /* Need to return memory to domheap */
-            _shadow_prealloc(d, 1);
+            if ( !_shadow_prealloc(d, 1) )
+                return -ENOMEM;
+
             sp = page_list_remove_head(&d->arch.paging.shadow.freelist);
             ASSERT(sp);
             /*
@@ -2339,12 +2361,13 @@ static void sh_update_paging_modes(struct vcpu *v)
     if ( mfn_eq(v->arch.paging.shadow.oos_snapshot[0], INVALID_MFN) )
     {
         int i;
+
+        if ( !shadow_prealloc(d, SH_type_oos_snapshot, SHADOW_OOS_PAGES) )
+            return;
+
         for(i = 0; i < SHADOW_OOS_PAGES; i++)
-        {
-            shadow_prealloc(d, SH_type_oos_snapshot, 1);
             v->arch.paging.shadow.oos_snapshot[i] =
                 shadow_alloc(d, SH_type_oos_snapshot, 0);
-        }
     }
 #endif /* OOS */
 
@@ -2408,6 +2431,9 @@ static void sh_update_paging_modes(struct vcpu *v)
             mfn_t mmfn = sh_make_monitor_table(
                              v, v->arch.paging.mode->shadow.shadow_levels);
 
+            if ( mfn_eq(mmfn, INVALID_MFN) )
+                return;
+
             v->arch.hvm.monitor_table = pagetable_from_mfn(mmfn);
             make_cr3(v, mmfn);
             hvm_update_host_cr3(v);
@@ -2446,6 +2472,12 @@ static void sh_update_paging_modes(struct vcpu *v)
                 v->arch.hvm.monitor_table = pagetable_null();
                 new_mfn = sh_make_monitor_table(
                               v, v->arch.paging.mode->shadow.shadow_levels);
+                if ( mfn_eq(new_mfn, INVALID_MFN) )
+                {
+                    sh_destroy_monitor_table(v, old_mfn,
+                                             old_mode->shadow.shadow_levels);
+                    return;
+                }
                 v->arch.hvm.monitor_table = pagetable_from_mfn(new_mfn);
                 SHADOW_PRINTK("new monitor table %"PRI_mfn "\n",
                                mfn_x(new_mfn));
@@ -2531,7 +2563,12 @@ void sh_set_toplevel_shadow(struct vcpu *v,
     if ( !mfn_valid(smfn) )
     {
         /* Make sure there's enough free shadow memory. */
-        shadow_prealloc(d, root_type, 1);
+        if ( !shadow_prealloc(d, root_type, 1) )
+        {
+            new_entry = pagetable_null();
+            goto install_new_entry;
+        }
+
         /* Shadow the page. */
         smfn = make_shadow(v, gmfn, root_type);
     }
diff --git a/xen/arch/x86/mm/shadow/hvm.c b/xen/arch/x86/mm/shadow/hvm.c
index c084bc8ed7..29a58d9131 100644
--- a/xen/arch/x86/mm/shadow/hvm.c
+++ b/xen/arch/x86/mm/shadow/hvm.c
@@ -697,7 +697,9 @@ mfn_t sh_make_monitor_table(const struct vcpu *v, unsigned int shadow_levels)
     ASSERT(!pagetable_get_pfn(v->arch.hvm.monitor_table));
 
     /* Guarantee we can get the memory we need */
-    shadow_prealloc(d, SH_type_monitor_table, CONFIG_PAGING_LEVELS);
+    if ( !shadow_prealloc(d, SH_type_monitor_table, CONFIG_PAGING_LEVELS) )
+        return INVALID_MFN;
+
     m4mfn = shadow_alloc(d, SH_type_monitor_table, 0);
     mfn_to_page(m4mfn)->shadow_flags = 4;
 
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index a51ec5d4f5..2370b30602 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -2447,9 +2447,14 @@ static int cf_check sh_page_fault(
      * Preallocate shadow pages *before* removing writable accesses
      * otherwhise an OOS L1 might be demoted and promoted again with
      * writable mappings. */
-    shadow_prealloc(d,
-                    SH_type_l1_shadow,
-                    GUEST_PAGING_LEVELS < 4 ? 1 : GUEST_PAGING_LEVELS - 1);
+    if ( !shadow_prealloc(d, SH_type_l1_shadow,
+                          GUEST_PAGING_LEVELS < 4
+                          ? 1 : GUEST_PAGING_LEVELS - 1) )
+    {
+        paging_unlock(d);
+        put_gfn(d, gfn_x(gfn));
+        return 0;
+    }
 
     rc = gw_remove_write_accesses(v, va, &gw);
 
diff --git a/xen/arch/x86/mm/shadow/private.h b/xen/arch/x86/mm/shadow/private.h
index 3a74f45362..85bb26c7ea 100644
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -383,7 +383,8 @@ void shadow_promote(struct domain *d, mfn_t gmfn, u32 type);
 void shadow_demote(struct domain *d, mfn_t gmfn, u32 type);
 
 /* Shadow page allocation functions */
-void  shadow_prealloc(struct domain *d, u32 shadow_type, unsigned int count);
+bool __must_check shadow_prealloc(struct domain *d, unsigned int shadow_type,
+                                  unsigned int count);
 mfn_t shadow_alloc(struct domain *d,
                     u32 shadow_type,
                     unsigned long backpointer);
--
generated by git-patchbot for /home/xen/git/xen.git#master
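[Editorial note, not part of the patch: a toy sketch of the API change this commit makes - _shadow_prealloc() used to BUG() on failure; it now reports success via a __must_check bool, and the shadow_prealloc() wrapper crashes the domain at the single point where failure is fatal. Types, the reclaim logic, and the gcc-specific warn_unused_result attribute are illustrative stand-ins, not the real Xen code; the dying-domain early return anticipates the follow-up commit below.]

```c
#include <stdbool.h>

struct dom {
    unsigned int free_pages;
    bool is_dying;
    int crashed;                   /* 1 once domain_crash() ran */
};

/* Previously this would BUG() on failure; now it reports it. */
static bool __attribute__((warn_unused_result))  /* Xen's __must_check */
_prealloc(struct dom *d, unsigned int pages)
{
    if ( d->free_pages >= pages )
        return true;
    if ( d->is_dying )
        return false;  /* no reclaim while dying; teardown handles it */
    /* ... shadow-reclaim stages elided; assume they freed nothing ... */
    return false;
}

/* Wrapper crashes the domain here rather than in every caller. */
static bool prealloc(struct dom *d, unsigned int pages)
{
    bool ok = _prealloc(d, pages);

    if ( !ok && !d->is_dying )
        d->crashed = 1;            /* domain_crash(d) */
    return ok;
}

/* Returns 1 if all three scenarios behave as the patch intends. */
static int prealloc_demo(void)
{
    struct dom ok = { 8, false, 0 }, oom = { 0, false, 0 },
               dying = { 0, true, 0 };

    return prealloc(&ok, 4) && !ok.crashed &&
           !prealloc(&oom, 4) && oom.crashed &&
           !prealloc(&dying, 4) && !dying.crashed;
}
```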


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:34:04 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 16:34:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427849.677313 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oluyC-0000oS-0X; Fri, 21 Oct 2022 16:34:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427849.677313; Fri, 21 Oct 2022 16:34:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oluyB-0000oK-Tz; Fri, 21 Oct 2022 16:34:03 +0000
Received: by outflank-mailman (input) for mailman id 427849;
 Fri, 21 Oct 2022 16:34:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oluyB-0000oC-5P
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:34:03 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oluyB-000742-4d
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:34:03 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oluyB-0004Z9-3o
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:34:03 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=U97ybA5+qMOIM/rV0FTzg9PUpeaIaJZhCCrKFnGxAQg=; b=gMOo7oRjm1sYkDkCfSd2aJ5fBR
	ZtNGFOgxc8thw+XLvC5/tfPuro8EzYZXx4e2HVOepcqWkKKVULfsjieI3A0PP/0fJpFj43rkDOgq9
	yThSwh5SwiX/opYz56pl8/7tVqexVWk2dJTj2w5OcvIBWXVTA/v+DhbxsB7lyAjqNFKA=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] x86/p2m: refuse new allocations for dying domains
Message-Id: <E1oluyB-0004Z9-3o@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:34:03 +0000

commit ff600a8cf8e36f8ecbffecf96a035952e022ab87
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 14:23:22 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:23:22 2022 +0200

    x86/p2m: refuse new allocations for dying domains
    
    This will in particular prevent any attempts to add entries to the p2m,
    once - in a subsequent change - non-root entries have been removed.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/mm/hap/hap.c       |  5 ++++-
 xen/arch/x86/mm/shadow/common.c | 18 ++++++++++++++----
 2 files changed, 18 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 691d5d2dd1..9ce2123c42 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -245,6 +245,9 @@ static struct page_info *hap_alloc(struct domain *d)
 
     ASSERT(paging_locked_by_me(d));
 
+    if ( unlikely(d->is_dying) )
+        return NULL;
+
     pg = page_list_remove_head(&d->arch.paging.hap.freelist);
     if ( unlikely(!pg) )
         return NULL;
@@ -281,7 +284,7 @@ static struct page_info *cf_check hap_alloc_p2m_page(struct domain *d)
         d->arch.paging.hap.p2m_pages++;
         ASSERT(!page_get_owner(pg) && !(pg->count_info & PGC_count_mask));
     }
-    else if ( !d->arch.paging.p2m_alloc_failed )
+    else if ( !d->arch.paging.p2m_alloc_failed && !d->is_dying )
     {
         d->arch.paging.p2m_alloc_failed = 1;
         dprintk(XENLOG_ERR, "d%i failed to allocate from HAP pool\n",
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 5b24be5325..8cca19ef84 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -939,6 +939,10 @@ static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
     if ( d->arch.paging.shadow.free_pages >= pages )
         return true;
 
+    if ( unlikely(d->is_dying) )
+        /* No reclaim when the domain is dying, teardown will take care of it. */
+        return false;
+
     /* Shouldn't have enabled shadows if we've no vcpus. */
     ASSERT(d->vcpu && d->vcpu[0]);
 
@@ -991,7 +995,7 @@ static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
            d->arch.paging.shadow.free_pages,
            d->arch.paging.shadow.p2m_pages);
 
-    ASSERT(d->is_dying);
+    ASSERT_UNREACHABLE();
 
     guest_flush_tlb_mask(d, d->dirty_cpumask);
 
@@ -1005,10 +1009,13 @@ static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
  * to avoid freeing shadows that the caller is currently working on. */
 bool shadow_prealloc(struct domain *d, unsigned int type, unsigned int count)
 {
-    bool ret = _shadow_prealloc(d, shadow_size(type) * count);
+    bool ret;
 
-    if ( !ret && !d->is_dying &&
-         (!d->is_shutting_down || d->shutdown_code != SHUTDOWN_crash) )
+    if ( unlikely(d->is_dying) )
+       return false;
+
+    ret = _shadow_prealloc(d, shadow_size(type) * count);
+    if ( !ret && (!d->is_shutting_down || d->shutdown_code != SHUTDOWN_crash) )
         /*
          * Failing to allocate memory required for shadow usage can only result in
          * a domain crash, do it here rather that relying on every caller to do it.
@@ -1238,6 +1245,9 @@ shadow_alloc_p2m_page(struct domain *d)
 {
     struct page_info *pg = NULL;
 
+    if ( unlikely(d->is_dying) )
+       return NULL;
+
     /* This is called both from the p2m code (which never holds the
      * paging lock) and the log-dirty code (which always does). */
     paging_lock_recursive(d);
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:34:15 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 16:34:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427850.677316 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oluyN-0000rQ-1i; Fri, 21 Oct 2022 16:34:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427850.677316; Fri, 21 Oct 2022 16:34:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oluyM-0000rI-VV; Fri, 21 Oct 2022 16:34:14 +0000
Received: by outflank-mailman (input) for mailman id 427850;
 Fri, 21 Oct 2022 16:34:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oluyL-0000r6-8h
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:34:13 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oluyL-00074C-82
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:34:13 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oluyL-0004aJ-76
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:34:13 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=n/kJeONxf+SEBXeKjNEhlM3LPbpVlAuqtNA1WjSIZ+M=; b=zoh93+1ZElpaZ3ofSs/VV3+074
	ctDREj08gp+nLAVUxQdjiVRhupqEkO0FKdAySgQm/YtgQclVpnEnah1YUgRlK6y1Lmox0U+12NOUy
	fRfs+FLAvUCAlw6NRtmBpKiKzJ25qWxJ68kCDMqmC9PiS3Kj6XssbYt2VNyfL9XtmpwI=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] x86/p2m: truly free paging pool memory for dying domains
Message-Id: <E1oluyL-0004aJ-76@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:34:13 +0000

commit f50a2c0e1d057c00d6061f40ae24d068226052ad
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 14:23:51 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:23:51 2022 +0200

    x86/p2m: truly free paging pool memory for dying domains
    
    Modify {hap,shadow}_free to free the page immediately if the domain is
    dying, so that pages don't accumulate in the pool when
    {shadow,hap}_final_teardown() gets called. This limits the amount of
    work which needs to be done there (in a non-preemptable manner).
    
    Note the call to shadow_free() in shadow_free_p2m_page() is moved after
    increasing total_pages, so that the decrease done in shadow_free() in
    case the domain is dying doesn't underflow the counter, even if just for
    a short interval.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/mm/hap/hap.c       | 12 ++++++++++++
 xen/arch/x86/mm/shadow/common.c | 28 +++++++++++++++++++++++++---
 2 files changed, 37 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 9ce2123c42..dbdf4f6dd1 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -265,6 +265,18 @@ static void hap_free(struct domain *d, mfn_t mfn)
 
     ASSERT(paging_locked_by_me(d));
 
+    /*
+     * For dying domains, actually free the memory here. This way less work is
+     * left to hap_final_teardown(), which cannot easily have preemption checks
+     * added.
+     */
+    if ( unlikely(d->is_dying) )
+    {
+        free_domheap_page(pg);
+        d->arch.paging.hap.total_pages--;
+        return;
+    }
+
     d->arch.paging.hap.free_pages++;
     page_list_add_tail(pg, &d->arch.paging.hap.freelist);
 }
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 8cca19ef84..ec2fc678fa 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -1187,6 +1187,7 @@ mfn_t shadow_alloc(struct domain *d,
 void shadow_free(struct domain *d, mfn_t smfn)
 {
     struct page_info *next = NULL, *sp = mfn_to_page(smfn);
+    bool dying = ACCESS_ONCE(d->is_dying);
     struct page_list_head *pin_list;
     unsigned int pages;
     u32 shadow_type;
@@ -1229,11 +1230,32 @@ void shadow_free(struct domain *d, mfn_t smfn)
          * just before the allocator hands the page out again. */
         page_set_tlbflush_timestamp(sp);
         perfc_decr(shadow_alloc_count);
-        page_list_add_tail(sp, &d->arch.paging.shadow.freelist);
+
+        /*
+         * For dying domains, actually free the memory here. This way less
+         * work is left to shadow_final_teardown(), which cannot easily have
+         * preemption checks added.
+         */
+        if ( unlikely(dying) )
+        {
+            /*
+             * The backpointer field (sh.back) used by shadow code aliases the
+             * domain owner field, unconditionally clear it here to avoid
+             * free_domheap_page() attempting to parse it.
+             */
+            page_set_owner(sp, NULL);
+            free_domheap_page(sp);
+        }
+        else
+            page_list_add_tail(sp, &d->arch.paging.shadow.freelist);
+
         sp = next;
     }
 
-    d->arch.paging.shadow.free_pages += pages;
+    if ( unlikely(dying) )
+        d->arch.paging.shadow.total_pages -= pages;
+    else
+        d->arch.paging.shadow.free_pages += pages;
 }
 
 /* Divert a page from the pool to be used by the p2m mapping.
@@ -1303,9 +1325,9 @@ shadow_free_p2m_page(struct domain *d, struct page_info *pg)
      * paging lock) and the log-dirty code (which always does). */
     paging_lock_recursive(d);
 
-    shadow_free(d, page_to_mfn(pg));
     d->arch.paging.shadow.p2m_pages--;
     d->arch.paging.shadow.total_pages++;
+    shadow_free(d, page_to_mfn(pg));
 
     paging_unlock(d);
 }
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:34:25 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 16:34:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427851.677320 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oluyX-0000uu-3a; Fri, 21 Oct 2022 16:34:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427851.677320; Fri, 21 Oct 2022 16:34:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oluyX-0000uk-0j; Fri, 21 Oct 2022 16:34:25 +0000
Received: by outflank-mailman (input) for mailman id 427851;
 Fri, 21 Oct 2022 16:34:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oluyV-0000uS-CD
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:34:23 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oluyV-00074N-BS
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:34:23 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oluyV-0004bW-Ac
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:34:23 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=TMzRSOaLuZUzDn59YK6LoYH1xeWO8zy5TqL7YZpApb4=; b=ebtWExD9maWbD9SaKBj90nFasz
	2/luVa218y9fcCgvzkF+tpJ39+L/eIfl7DbgE/hROQ6f9I8k9gkZL9EieG6yrwd0RtGFE52RFZ1HB
	nUnMa10TQgR9RpWI4YDd5Ze2h/qLg+3Ta/Xqs9u9xZAppjAlI6y1zlA5XmQLU95Z+lY4=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] x86/p2m: free the paging memory pool preemptively
Message-Id: <E1oluyV-0004bW-Ac@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:34:23 +0000

commit e7aa55c0aab36d994bf627c92bd5386ae167e16e
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 14:24:21 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:24:21 2022 +0200

    x86/p2m: free the paging memory pool preemptively
    
    The paging memory pool is currently freed in two different places:
    from {shadow,hap}_teardown() via domain_relinquish_resources() and
    from {shadow,hap}_final_teardown() via complete_domain_destroy().
    While the former does handle preemption, the latter doesn't.
    
    Attempt to move as much p2m-related freeing as possible to happen
    before the call to {shadow,hap}_teardown(), so that most memory can be
    freed in a preemptive way.  In order to avoid causing issues for
    existing callers, leave the root p2m page tables set and free them in
    {hap,shadow}_final_teardown().  Also modify {hap,shadow}_free to free
    the page immediately if the domain is dying, so that pages don't
    accumulate in the pool when {shadow,hap}_final_teardown() gets called.
    
    Move altp2m_vcpu_disable_ve() to be done in hap_teardown(), as that's
    the place where altp2m_active gets disabled now.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Reported-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/domain.c           |  7 -------
 xen/arch/x86/mm/hap/hap.c       | 42 +++++++++++++++++++++++++----------------
 xen/arch/x86/mm/shadow/common.c | 12 ++++++++++++
 3 files changed, 38 insertions(+), 23 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 41e1e3f272..a5d2d66852 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -38,7 +38,6 @@
 #include <xen/livepatch.h>
 #include <public/sysctl.h>
 #include <public/hvm/hvm_vcpu.h>
-#include <asm/altp2m.h>
 #include <asm/regs.h>
 #include <asm/mc146818rtc.h>
 #include <asm/system.h>
@@ -2406,12 +2405,6 @@ int domain_relinquish_resources(struct domain *d)
             vpmu_destroy(v);
         }
 
-        if ( altp2m_active(d) )
-        {
-            for_each_vcpu ( d, v )
-                altp2m_vcpu_disable_ve(v);
-        }
-
         if ( is_pv_domain(d) )
         {
             for_each_vcpu ( d, v )
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index dbdf4f6dd1..d058050d63 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -28,6 +28,7 @@
 #include <xen/domain_page.h>
 #include <xen/guest_access.h>
 #include <xen/keyhandler.h>
+#include <asm/altp2m.h>
 #include <asm/event.h>
 #include <asm/page.h>
 #include <asm/current.h>
@@ -546,24 +547,8 @@ void hap_final_teardown(struct domain *d)
     unsigned int i;
 
     if ( hvm_altp2m_supported() )
-    {
-        d->arch.altp2m_active = 0;
-
-        if ( d->arch.altp2m_eptp )
-        {
-            free_xenheap_page(d->arch.altp2m_eptp);
-            d->arch.altp2m_eptp = NULL;
-        }
-
-        if ( d->arch.altp2m_visible_eptp )
-        {
-            free_xenheap_page(d->arch.altp2m_visible_eptp);
-            d->arch.altp2m_visible_eptp = NULL;
-        }
-
         for ( i = 0; i < MAX_ALTP2M; i++ )
             p2m_teardown(d->arch.altp2m_p2m[i], true);
-    }
 
     /* Destroy nestedp2m's first */
     for (i = 0; i < MAX_NESTEDP2M; i++) {
@@ -578,6 +563,8 @@ void hap_final_teardown(struct domain *d)
     paging_lock(d);
     hap_set_allocation(d, 0, NULL);
     ASSERT(d->arch.paging.hap.p2m_pages == 0);
+    ASSERT(d->arch.paging.hap.free_pages == 0);
+    ASSERT(d->arch.paging.hap.total_pages == 0);
     paging_unlock(d);
 }
 
@@ -603,6 +590,7 @@ void hap_vcpu_teardown(struct vcpu *v)
 void hap_teardown(struct domain *d, bool *preempted)
 {
     struct vcpu *v;
+    unsigned int i;
 
     ASSERT(d->is_dying);
     ASSERT(d != current->domain);
@@ -611,6 +599,28 @@ void hap_teardown(struct domain *d, bool *preempted)
     for_each_vcpu ( d, v )
         hap_vcpu_teardown(v);
 
+    /* Leave the root pt in case we get further attempts to modify the p2m. */
+    if ( hvm_altp2m_supported() )
+    {
+        if ( altp2m_active(d) )
+            for_each_vcpu ( d, v )
+                altp2m_vcpu_disable_ve(v);
+
+        d->arch.altp2m_active = 0;
+
+        FREE_XENHEAP_PAGE(d->arch.altp2m_eptp);
+        FREE_XENHEAP_PAGE(d->arch.altp2m_visible_eptp);
+
+        for ( i = 0; i < MAX_ALTP2M; i++ )
+            p2m_teardown(d->arch.altp2m_p2m[i], false);
+    }
+
+    /* Destroy nestedp2m's after altp2m. */
+    for ( i = 0; i < MAX_NESTEDP2M; i++ )
+        p2m_teardown(d->arch.nested_p2m[i], false);
+
+    p2m_teardown(p2m_get_hostp2m(d), false);
+
     paging_lock(d); /* Keep various asserts happy */
 
     if ( d->arch.paging.hap.total_pages != 0 )
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index ec2fc678fa..64ca18b393 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2831,8 +2831,17 @@ void shadow_teardown(struct domain *d, bool *preempted)
     for_each_vcpu ( d, v )
         shadow_vcpu_teardown(v);
 
+    p2m_teardown(p2m_get_hostp2m(d), false);
+
     paging_lock(d);
 
+    /*
+     * Reclaim all shadow memory so that shadow_set_allocation() doesn't find
+     * in-use pages, as _shadow_prealloc() will no longer try to reclaim pages
+     * because the domain is dying.
+     */
+    shadow_blow_tables(d);
+
 #if (SHADOW_OPTIMIZATIONS & (SHOPT_VIRTUAL_TLB|SHOPT_OUT_OF_SYNC))
     /* Free the virtual-TLB array attached to each vcpu */
     for_each_vcpu(d, v)
@@ -2953,6 +2962,9 @@ void shadow_final_teardown(struct domain *d)
                    d->arch.paging.shadow.total_pages,
                    d->arch.paging.shadow.free_pages,
                    d->arch.paging.shadow.p2m_pages);
+    ASSERT(!d->arch.paging.shadow.total_pages);
+    ASSERT(!d->arch.paging.shadow.free_pages);
+    ASSERT(!d->arch.paging.shadow.p2m_pages);
     paging_unlock(d);
 }
 
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:34:35 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 16:34:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427852.677324 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oluyh-0000yN-71; Fri, 21 Oct 2022 16:34:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427852.677324; Fri, 21 Oct 2022 16:34:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oluyh-0000yF-4N; Fri, 21 Oct 2022 16:34:35 +0000
Received: by outflank-mailman (input) for mailman id 427852;
 Fri, 21 Oct 2022 16:34:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oluyf-0000xv-FV
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:34:33 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oluyf-00074R-Eh
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:34:33 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oluyf-0004cJ-Dx
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:34:33 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=AL0FKaS0jAzmPVQjAYiqB4G3lA380TEG9BvBfXZcNfw=; b=I8LPBV2S3dogXFf9ziauBgBmGF
	fq6X1rPzGR9E9r6CeGo1IrkLbm8GzGHfXOwgDALa49VQRwGfUrXHQ2j4EsP410HgJewEsOUvuHFRn
	/uHglvCHjmtywKsDmnDxMkSgfJsCEya6FVJyeJo22GPRbj5aInG9bgNx7L8U3JOWATAA=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] xen/x86: p2m: Add preemption in p2m_teardown()
Message-Id: <E1oluyf-0004cJ-Dx@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:34:33 +0000

commit 8a2111250b424edc49c65c4d41b276766d30635c
Author:     Julien Grall <jgrall@amazon.com>
AuthorDate: Tue Oct 11 14:24:48 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:24:48 2022 +0200

    xen/x86: p2m: Add preemption in p2m_teardown()
    
    The list p2m->pages contains all the pages used by the P2M. On large
    instances this can be quite long, and the time spent calling
    d->arch.paging.free_page() exceeds 1ms for an 80GB guest on a Xen
    running in a nested environment on a c5.metal.
    
    By extrapolation, it would take > 100ms for an 8TB guest (the largest
    we currently security support). So add some preemption in
    p2m_teardown() and propagate it to the callers. Note there are 3
    places where the preemption is not enabled:
        - hap_final_teardown()/shadow_final_teardown(): We prevent
          updates to the P2M once the domain is dying (so no more pages
          can be allocated), and most of the P2M pages will be freed in
          a preemptive manner when relinquishing the resources. So it is
          fine to disable preemption here.
        - shadow_enable(): This is fine because it will undo the allocation
          that may have been made by p2m_alloc_table() (so only the root
          page table).
    
    The preemption is arbitrarily checked every 1024 iterations.
    
    We now need to include <xen/event.h> in p2m-basic in order to
    import the definition for local_events_need_delivery() used by
    general_preempt_check(). Ideally, the inclusion should happen in
    xen/sched.h but it opened a can of worms.
    
    Note that with the current approach, Xen doesn't keep track of whether
    the alt/nested P2Ms have been cleared, so there is some redundant work.
    However, this is not expected to incur much overhead (the P2M lock
    shouldn't be contended during teardown), so this optimization is left
    outside of the security event.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    ----
    Changes since v12:
        - Correct altp2m preemption check placement.
    
    Changes since v9:
        - Integrate patch into series.
    
    Changes since v2:
        - Rework the loop doing the preemption
        - Add a comment in shadow_enable() to explain why p2m_teardown()
          doesn't need to be preemptible.
    
    Changes since v1:
        - Update the commit message
        - Rebase on top of Roger's v8 series
        - Fix preemption check
        - Use 'unsigned int' rather than 'unsigned long' for the counter
---
 xen/arch/x86/include/asm/p2m.h  |  2 +-
 xen/arch/x86/mm/hap/hap.c       | 22 ++++++++++++++++------
 xen/arch/x86/mm/p2m-basic.c     | 19 ++++++++++++++++---
 xen/arch/x86/mm/shadow/common.c | 12 +++++++++---
 4 files changed, 42 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/include/asm/p2m.h b/xen/arch/x86/include/asm/p2m.h
index bafbd96052..bd684d02f3 100644
--- a/xen/arch/x86/include/asm/p2m.h
+++ b/xen/arch/x86/include/asm/p2m.h
@@ -600,7 +600,7 @@ int p2m_init(struct domain *d);
 int p2m_alloc_table(struct p2m_domain *p2m);
 
 /* Return all the p2m resources to Xen. */
-void p2m_teardown(struct p2m_domain *p2m, bool remove_root);
+void p2m_teardown(struct p2m_domain *p2m, bool remove_root, bool *preempted);
 void p2m_final_teardown(struct domain *d);
 
 /* Add/remove a page to/from a domain's p2m table. */
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index d058050d63..f809ea9aa6 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -548,17 +548,17 @@ void hap_final_teardown(struct domain *d)
 
     if ( hvm_altp2m_supported() )
         for ( i = 0; i < MAX_ALTP2M; i++ )
-            p2m_teardown(d->arch.altp2m_p2m[i], true);
+            p2m_teardown(d->arch.altp2m_p2m[i], true, NULL);
 
     /* Destroy nestedp2m's first */
     for (i = 0; i < MAX_NESTEDP2M; i++) {
-        p2m_teardown(d->arch.nested_p2m[i], true);
+        p2m_teardown(d->arch.nested_p2m[i], true, NULL);
     }
 
     if ( d->arch.paging.hap.total_pages != 0 )
         hap_teardown(d, NULL);
 
-    p2m_teardown(p2m_get_hostp2m(d), true);
+    p2m_teardown(p2m_get_hostp2m(d), true, NULL);
     /* Free any memory that the p2m teardown released */
     paging_lock(d);
     hap_set_allocation(d, 0, NULL);
@@ -612,14 +612,24 @@ void hap_teardown(struct domain *d, bool *preempted)
         FREE_XENHEAP_PAGE(d->arch.altp2m_visible_eptp);
 
         for ( i = 0; i < MAX_ALTP2M; i++ )
-            p2m_teardown(d->arch.altp2m_p2m[i], false);
+        {
+            p2m_teardown(d->arch.altp2m_p2m[i], false, preempted);
+            if ( preempted && *preempted )
+                return;
+        }
     }
 
     /* Destroy nestedp2m's after altp2m. */
     for ( i = 0; i < MAX_NESTEDP2M; i++ )
-        p2m_teardown(d->arch.nested_p2m[i], false);
+    {
+        p2m_teardown(d->arch.nested_p2m[i], false, preempted);
+        if ( preempted && *preempted )
+            return;
+    }
 
-    p2m_teardown(p2m_get_hostp2m(d), false);
+    p2m_teardown(p2m_get_hostp2m(d), false, preempted);
+    if ( preempted && *preempted )
+        return;
 
     paging_lock(d); /* Keep various asserts happy */
 
diff --git a/xen/arch/x86/mm/p2m-basic.c b/xen/arch/x86/mm/p2m-basic.c
index 3231aaa9ba..47b780d6d6 100644
--- a/xen/arch/x86/mm/p2m-basic.c
+++ b/xen/arch/x86/mm/p2m-basic.c
@@ -23,6 +23,7 @@
  * along with this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <xen/event.h>
 #include <xen/types.h>
 #include <asm/p2m.h>
 #include "mm-locks.h"
@@ -154,11 +155,12 @@ int p2m_init(struct domain *d)
  * hvm fixme: when adding support for pvh non-hardware domains, this path must
  * cleanup any foreign p2m types (release refcnts on them).
  */
-void p2m_teardown(struct p2m_domain *p2m, bool remove_root)
+void p2m_teardown(struct p2m_domain *p2m, bool remove_root, bool *preempted)
 {
 #ifdef CONFIG_HVM
     struct page_info *pg, *root_pg = NULL;
     struct domain *d;
+    unsigned int i = 0;
 
     if ( !p2m )
         return;
@@ -180,8 +182,19 @@ void p2m_teardown(struct p2m_domain *p2m, bool remove_root)
     }
 
     while ( (pg = page_list_remove_head(&p2m->pages)) )
-        if ( pg != root_pg )
-            d->arch.paging.free_page(d, pg);
+    {
+        if ( pg == root_pg )
+            continue;
+
+        d->arch.paging.free_page(d, pg);
+
+        /* Arbitrarily check preemption every 1024 iterations */
+        if ( preempted && !(++i % 1024) && general_preempt_check() )
+        {
+            *preempted = true;
+            break;
+        }
+    }
 
     if ( root_pg )
         page_list_add(root_pg, &p2m->pages);
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 64ca18b393..d985d51614 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2776,8 +2776,12 @@ int shadow_enable(struct domain *d, u32 mode)
     paging_unlock(d);
  out_unlocked:
 #ifdef CONFIG_HVM
+    /*
+     * This is fine to ignore the preemption here because only the root
+     * will be allocated by p2m_alloc_table().
+     */
     if ( rv != 0 && !pagetable_is_null(p2m_get_pagetable(p2m)) )
-        p2m_teardown(p2m, true);
+        p2m_teardown(p2m, true, NULL);
 #endif
     if ( rv != 0 && pg != NULL )
     {
@@ -2831,7 +2835,9 @@ void shadow_teardown(struct domain *d, bool *preempted)
     for_each_vcpu ( d, v )
         shadow_vcpu_teardown(v);
 
-    p2m_teardown(p2m_get_hostp2m(d), false);
+    p2m_teardown(p2m_get_hostp2m(d), false, preempted);
+    if ( preempted && *preempted )
+        return;
 
     paging_lock(d);
 
@@ -2952,7 +2958,7 @@ void shadow_final_teardown(struct domain *d)
         shadow_teardown(d, NULL);
 
     /* It is now safe to pull down the p2m map. */
-    p2m_teardown(p2m_get_hostp2m(d), true);
+    p2m_teardown(p2m_get_hostp2m(d), true, NULL);
     /* Free any shadow memory that the p2m teardown released */
     paging_lock(d);
     shadow_set_allocation(d, 0, NULL);
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:34:45 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] libxl, docs: Add per-arch extra default paging memory
Message-Id: <E1oluyp-0004d6-H8@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:34:43 +0000

commit 156a239ea288972425f967ac807b3cb5b5e14874
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Mon Jun 6 06:17:27 2022 +0000
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:28:37 2022 +0200

    libxl, docs: Add per-arch extra default paging memory
    
    This commit adds a per-arch macro `EXTRA_DEFAULT_PAGING_MEM_MB`
    to the default paging memory size calculation, in order to cover
    the p2m pool for the extended regions of an xl-based guest on Arm.
    
    On Arm, the extra default paging memory is 128MB. On x86 it is
    zero, since there are no extended regions on x86.
    
    Also update the xl.cfg documentation to describe the Arm
    behaviour introduced by these code changes.
    
    This is part of CVE-2022-33747 / XSA-409.
    
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
---
 docs/man/xl.cfg.5.pod.in        |  5 +++++
 tools/libs/light/libxl_arch.h   | 11 +++++++++++
 tools/libs/light/libxl_create.c |  7 ++++++-
 3 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index b2901e04cf..31e58b73b0 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -2725,6 +2725,11 @@ are not using hardware assisted paging (i.e. you are using shadow
 mode) and your guest workload consists of a very large number of
 similar processes then increasing this value may improve performance.
 
+On Arm, this field is used to determine the size of the guest P2M pages
+pool, and the default value is the same as x86 HAP mode, plus 512KB to
+cover the extended regions. Users should adjust this value if bigger
+P2M pool size is needed.
+
 =back
 
 =head2 Device-Model Options
diff --git a/tools/libs/light/libxl_arch.h b/tools/libs/light/libxl_arch.h
index 03b89929e6..247cca130f 100644
--- a/tools/libs/light/libxl_arch.h
+++ b/tools/libs/light/libxl_arch.h
@@ -99,10 +99,21 @@ void libxl__arch_update_domain_config(libxl__gc *gc,
 
 #define LAPIC_BASE_ADDRESS  0xfee00000
 #define ACPI_INFO_PHYSICAL_ADDRESS 0xfc000000
+#define EXTRA_DEFAULT_PAGING_MEM_MB 0
 
 int libxl__dom_load_acpi(libxl__gc *gc,
                          const libxl_domain_build_info *b_info,
                          struct xc_dom_image *dom);
+
+#else
+
+/*
+ * 128MB extra default paging memory on Arm for extended regions. This
+ * value is normally enough for domains that are not running backend.
+ * See the `shadow_memory` in xl.cfg documentation for more information.
+ */
+#define EXTRA_DEFAULT_PAGING_MEM_MB 128
+
 #endif
 
 #endif
diff --git a/tools/libs/light/libxl_create.c b/tools/libs/light/libxl_create.c
index b9dd2deedf..612eacfc7f 100644
--- a/tools/libs/light/libxl_create.c
+++ b/tools/libs/light/libxl_create.c
@@ -1035,12 +1035,17 @@ unsigned long libxl__get_required_paging_memory(unsigned long maxmem_kb,
      * plus 1 page per MiB of RAM for the P2M map (for non-PV guests),
      * plus 1 page per MiB of RAM to shadow the resident processes (for shadow
      * mode guests).
+     * plus 1 page per MiB of RAM for the architecture specific
+     * EXTRA_DEFAULT_PAGING_MEM_MB. On x86, this value is zero. On Arm, this
+     * value is 128 MiB to cover domain extended regions (enough for domains
+     * that are not running backend).
      * This is higher than the minimum that Xen would allocate if no value
      * were given (but the Xen minimum is for safety, not performance).
      */
     return 4 * (256 * smp_cpus +
                 ((type != LIBXL_DOMAIN_TYPE_PV) + !hap) *
-                (maxmem_kb / 1024));
+                (maxmem_kb / 1024) +
+                EXTRA_DEFAULT_PAGING_MEM_MB);
 }
 
 static unsigned long libxl__get_required_iommu_memory(unsigned long maxmem_kb)
--
generated by git-patchbot for /home/xen/git/xen.git#master
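The revised formula in libxl__get_required_paging_memory() can be checked numerically. A Python sketch (illustrative, not the libxl C code; constants are taken from the hunk above):

```python
def required_paging_kb(maxmem_kb, smp_cpus, is_pv, hap, extra_mb):
    # 1 MiB (256 pages) per vCPU, plus 4 KiB per MiB of guest RAM for
    # the P2M map (non-PV) and for shadowing (non-HAP), plus 4 KiB per
    # MiB of the per-arch EXTRA_DEFAULT_PAGING_MEM_MB.
    return 4 * (256 * smp_cpus
                + ((not is_pv) + (not hap)) * (maxmem_kb // 1024)
                + extra_mb)

# x86 HVM guest with HAP, 1 GiB RAM, 4 vCPUs: no extra on x86
print(required_paging_kb(1024 * 1024, 4, False, True, 0))    # 8192
# Arm guest of the same size: extra_mb = 128
print(required_paging_kb(1024 * 1024, 4, False, True, 128))  # 8704
```

Note the Arm extra of 128 contributes 4 * 128 = 512 KB to the default, which matches the "plus 512KB to cover the extended regions" wording added to xl.cfg above.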


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:34:55 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] xen/arm: Construct the P2M pages pool for guests
Message-Id: <E1oluyz-0004dy-KL@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:34:53 +0000

commit 55914f7fc91a468649b8a3ec3f53ae1c4aca6670
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Mon Jun 6 06:17:28 2022 +0000
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:28:39 2022 +0200

    xen/arm: Construct the P2M pages pool for guests
    
    This commit constructs the p2m pages pool for guests from the
    data structure and helper perspective.
    
    This is implemented by:
    
    - Adding a `struct paging_domain`, containing a freelist, a page
    counter and a spinlock, to `struct arch_domain` to track the free
    p2m pages and the total number of pages in the p2m pages pool.
    
    - Adding a helper `p2m_get_allocation` to get the p2m pool size.
    
    - Adding a helper `p2m_set_allocation` to set the p2m pages pool
    size. This helper should be called before allocating memory for
    a guest.
    
    - Adding a helper `p2m_teardown_allocation` to free the p2m pages
    pool. This helper should be called during xl domain destruction.
    
    This is part of CVE-2022-33747 / XSA-409.
    
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
 xen/arch/arm/include/asm/domain.h | 10 +++++
 xen/arch/arm/include/asm/p2m.h    |  4 ++
 xen/arch/arm/p2m.c                | 88 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 102 insertions(+)

diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
index 26a8348eed..2ce6764322 100644
--- a/xen/arch/arm/include/asm/domain.h
+++ b/xen/arch/arm/include/asm/domain.h
@@ -53,6 +53,14 @@ struct vtimer {
     uint64_t cval;
 };
 
+struct paging_domain {
+    spinlock_t lock;
+    /* Free P2M pages from the pre-allocated P2M pool */
+    struct page_list_head p2m_freelist;
+    /* Number of pages from the pre-allocated P2M pool */
+    unsigned long p2m_total_pages;
+};
+
 struct arch_domain
 {
 #ifdef CONFIG_ARM_64
@@ -64,6 +72,8 @@ struct arch_domain
 
     struct hvm_domain hvm;
 
+    struct paging_domain paging;
+
     struct vmmio vmmio;
 
     /* Continuable domain_relinquish_resources(). */
diff --git a/xen/arch/arm/include/asm/p2m.h b/xen/arch/arm/include/asm/p2m.h
index a15ea67f9b..42bfd548c4 100644
--- a/xen/arch/arm/include/asm/p2m.h
+++ b/xen/arch/arm/include/asm/p2m.h
@@ -218,6 +218,10 @@ void p2m_restore_state(struct vcpu *n);
 /* Print debugging/statistial info about a domain's p2m */
 void p2m_dump_info(struct domain *d);
 
+unsigned int p2m_get_allocation(struct domain *d);
+int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted);
+int p2m_teardown_allocation(struct domain *d);
+
 static inline void p2m_write_lock(struct p2m_domain *p2m)
 {
     write_lock(&p2m->lock);
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index b445f4d754..db385fe410 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -44,6 +44,92 @@ static uint64_t generate_vttbr(uint16_t vmid, mfn_t root_mfn)
     return (mfn_to_maddr(root_mfn) | ((uint64_t)vmid << 48));
 }
 
+/* Return the size of the pool, rounded up to the nearest MB */
+unsigned int p2m_get_allocation(struct domain *d)
+{
+    unsigned long nr_pages = ACCESS_ONCE(d->arch.paging.p2m_total_pages);
+
+    return ROUNDUP(nr_pages, 1 << (20 - PAGE_SHIFT)) >> (20 - PAGE_SHIFT);
+}
+
+/*
+ * Set the pool of pages to the required number of pages.
+ * Returns 0 for success, non-zero for failure.
+ * Call with d->arch.paging.lock held.
+ */
+int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted)
+{
+    struct page_info *pg;
+
+    ASSERT(spin_is_locked(&d->arch.paging.lock));
+
+    for ( ; ; )
+    {
+        if ( d->arch.paging.p2m_total_pages < pages )
+        {
+            /* Need to allocate more memory from domheap */
+            pg = alloc_domheap_page(NULL, 0);
+            if ( pg == NULL )
+            {
+                printk(XENLOG_ERR "Failed to allocate P2M pages.\n");
+                return -ENOMEM;
+            }
+            ACCESS_ONCE(d->arch.paging.p2m_total_pages) =
+                d->arch.paging.p2m_total_pages + 1;
+            page_list_add_tail(pg, &d->arch.paging.p2m_freelist);
+        }
+        else if ( d->arch.paging.p2m_total_pages > pages )
+        {
+            /* Need to return memory to domheap */
+            pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
+            if( pg )
+            {
+                ACCESS_ONCE(d->arch.paging.p2m_total_pages) =
+                    d->arch.paging.p2m_total_pages - 1;
+                free_domheap_page(pg);
+            }
+            else
+            {
+                printk(XENLOG_ERR
+                       "Failed to free P2M pages, P2M freelist is empty.\n");
+                return -ENOMEM;
+            }
+        }
+        else
+            break;
+
+        /* Check to see if we need to yield and try again */
+        if ( preempted && general_preempt_check() )
+        {
+            *preempted = true;
+            return -ERESTART;
+        }
+    }
+
+    return 0;
+}
+
+int p2m_teardown_allocation(struct domain *d)
+{
+    int ret = 0;
+    bool preempted = false;
+
+    spin_lock(&d->arch.paging.lock);
+    if ( d->arch.paging.p2m_total_pages != 0 )
+    {
+        ret = p2m_set_allocation(d, 0, &preempted);
+        if ( preempted )
+        {
+            spin_unlock(&d->arch.paging.lock);
+            return -ERESTART;
+        }
+        ASSERT(d->arch.paging.p2m_total_pages == 0);
+    }
+    spin_unlock(&d->arch.paging.lock);
+
+    return ret;
+}
+
 /* Unlock the flush and do a P2M TLB flush if necessary */
 void p2m_write_unlock(struct p2m_domain *p2m)
 {
@@ -1623,7 +1709,9 @@ int p2m_init(struct domain *d)
     unsigned int cpu;
 
     rwlock_init(&p2m->lock);
+    spin_lock_init(&d->arch.paging.lock);
     INIT_PAGE_LIST_HEAD(&p2m->pages);
+    INIT_PAGE_LIST_HEAD(&d->arch.paging.p2m_freelist);
 
     p2m->vmid = INVALID_VMID;
 
--
generated by git-patchbot for /home/xen/git/xen.git#master
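The p2m_set_allocation() loop above grows or shrinks the pool one page per iteration precisely so a preemption check can fire between steps. A Python model of that logic (illustrative; the real code moves domheap pages under d->arch.paging.lock):

```python
def p2m_set_allocation(pool, target, preempt_every=100):
    """Grow/shrink `pool` (a list standing in for the p2m freelist)
    toward `target` pages; return False if preempted mid-way."""
    steps = 0
    while len(pool) != target:
        if len(pool) < target:
            pool.append(object())   # models alloc_domheap_page()
        else:
            pool.pop()              # models free_domheap_page()
        steps += 1
        if steps % preempt_every == 0:
            return False            # preempted: caller re-runs later
    return True

pool = []
done = False
while not done:                     # the hypercall-continuation loop
    done = p2m_set_allocation(pool, 250)

assert len(pool) == 250
```

As in the C version, a caller that sees a preemption signal simply re-invokes the operation; the pool converges on the target across continuations.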


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:35:05 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] xen/arm, libxl: Implement XEN_DOMCTL_shadow_op for Arm
Message-Id: <E1oluz9-0004fK-ND@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:35:03 +0000

commit cf2a68d2ffbc3ce95e01449d46180bddb10d24a0
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Mon Jun 6 06:17:29 2022 +0000
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:28:42 2022 +0200

    xen/arm, libxl: Implement XEN_DOMCTL_shadow_op for Arm
    
    This commit implements the `XEN_DOMCTL_shadow_op` support in Xen
    for Arm. The p2m pages pool size for xl guests is supposed to be
    determined by `XEN_DOMCTL_shadow_op`. Hence, this commit:
    
    - Introduces a function `p2m_domctl` and implements the subops
    `XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION` and
    `XEN_DOMCTL_SHADOW_OP_GET_ALLOCATION` of `XEN_DOMCTL_shadow_op`.
    
    - Adds the `XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION` support in libxl.
    
    This enables setting the shadow memory pool size when creating a
    guest from xl, and retrieving the shadow memory pool size from
    Xen.
    
    Note that the `XEN_DOMCTL_shadow_op` added in this commit is only
    a dummy op, and the functionality of setting/getting p2m memory pool
    size for xl guests will be added in following commits.
    
    This is part of CVE-2022-33747 / XSA-409.
    
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
 tools/libs/light/libxl_arm.c | 12 ++++++++++++
 xen/arch/arm/domctl.c        | 32 ++++++++++++++++++++++++++++++++
 2 files changed, 44 insertions(+)

diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index 1a3ac1646e..2a5e93c284 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -209,6 +209,18 @@ int libxl__arch_domain_create(libxl__gc *gc,
                               libxl__domain_build_state *state,
                               uint32_t domid)
 {
+    libxl_ctx *ctx = libxl__gc_owner(gc);
+    unsigned int shadow_mb = DIV_ROUNDUP(d_config->b_info.shadow_memkb, 1024);
+
+    int r = xc_shadow_control(ctx->xch, domid,
+                              XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION,
+                              &shadow_mb, 0);
+    if (r) {
+        LOGED(ERROR, domid,
+              "Failed to set %u MiB shadow allocation", shadow_mb);
+        return ERROR_FAIL;
+    }
+
     return 0;
 }
 
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 1baf25c3d9..9bf72e6930 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -47,11 +47,43 @@ static int handle_vuart_init(struct domain *d,
     return rc;
 }
 
+static long p2m_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
+                       XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
+{
+    if ( unlikely(d == current->domain) )
+    {
+        printk(XENLOG_ERR "Tried to do a p2m domctl op on itself.\n");
+        return -EINVAL;
+    }
+
+    if ( unlikely(d->is_dying) )
+    {
+        printk(XENLOG_ERR "Tried to do a p2m domctl op on dying domain %u\n",
+               d->domain_id);
+        return -EINVAL;
+    }
+
+    switch ( sc->op )
+    {
+    case XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION:
+        return 0;
+    case XEN_DOMCTL_SHADOW_OP_GET_ALLOCATION:
+        return 0;
+    default:
+    {
+        printk(XENLOG_ERR "Bad p2m domctl op %u\n", sc->op);
+        return -EINVAL;
+    }
+    }
+}
+
 long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
                     XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     switch ( domctl->cmd )
     {
+    case XEN_DOMCTL_shadow_op:
+        return p2m_domctl(d, &domctl->u.shadow_op, u_domctl);
     case XEN_DOMCTL_cacheflush:
     {
         gfn_t s = _gfn(domctl->u.cacheflush.start_pfn);
--
generated by git-patchbot for /home/xen/git/xen.git#master
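The subops stubbed out here are wired up later in the series using shifts by (20 - PAGE_SHIFT) to convert between megabytes and 4 KiB pages (256 pages per MiB). A quick Python check of that round-trip (illustrative; mirrors `sc->mb << (20 - PAGE_SHIFT)` and the ROUNDUP in p2m_get_allocation()):

```python
PAGE_SHIFT = 12                          # 4 KiB pages
PAGES_PER_MB = 1 << (20 - PAGE_SHIFT)    # 256

def mb_to_pages(mb):
    return mb << (20 - PAGE_SHIFT)

def pages_to_mb(pages):
    # Round up to the nearest MiB, as p2m_get_allocation() does
    return (pages + PAGES_PER_MB - 1) >> (20 - PAGE_SHIFT)

assert mb_to_pages(4) == 1024
assert pages_to_mb(1024) == 4
assert pages_to_mb(1025) == 5            # a partial MiB rounds up
```

Rounding up matters for GET_ALLOCATION: a pool holding any partial MiB of pages must report the larger size, or a save/restore of the setting could shrink the pool.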


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:35:15 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] xen/arm: Allocate and free P2M pages from the P2M pool
Message-Id: <E1oluzJ-0004gB-QI@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:35:13 +0000

commit cbea5a1149ca7fd4b7cdbfa3ec2e4f109b601ff7
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Mon Jun 6 06:17:30 2022 +0000
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:28:44 2022 +0200

    xen/arm: Allocate and free P2M pages from the P2M pool
    
    This commit sets up and tears down the p2m pages pool for
    non-privileged Arm guests by calling `p2m_set_allocation` and
    `p2m_teardown_allocation`.
    
    - For dom0, P2M pages should come from heap directly instead of p2m
    pool, so that the kernel may take advantage of the extended regions.
    
    - For xl guests, the setting of the p2m pool is called in
    `XEN_DOMCTL_shadow_op` and the p2m pool is destroyed in
    `domain_relinquish_resources`. Note that domctl->u.shadow_op.mb is
    updated with the new size when setting the p2m pool.
    
    - For dom0less domUs, the p2m pool is set up before allocating
    memory during domain creation. Users can specify the p2m pool
    size via the `xen,domain-p2m-mem-mb` dts property.
    
    To actually allocate/free pages from the p2m pool, this commit adds
    two helper functions namely `p2m_alloc_page` and `p2m_free_page` to
    `struct p2m_domain`. By replacing the `alloc_domheap_page` and
    `free_domheap_page` with these two helper functions, p2m pages can
    be added/removed from the list of p2m pool rather than from the heap.
    
    Since pages from `p2m_alloc_page` are cleaned, take the opportunity
    to remove the redundant `clean_page` in `p2m_create_table`.
    
    This is part of CVE-2022-33747 / XSA-409.
    
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
 docs/misc/arm/device-tree/booting.txt |  8 +++++
 xen/arch/arm/domain.c                 |  6 ++++
 xen/arch/arm/domain_build.c           | 29 ++++++++++++++++++
 xen/arch/arm/domctl.c                 | 23 +++++++++++++-
 xen/arch/arm/p2m.c                    | 57 ++++++++++++++++++++++++++++++++---
 5 files changed, 118 insertions(+), 5 deletions(-)

diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
index c47a05e0da..87eaa3e254 100644
--- a/docs/misc/arm/device-tree/booting.txt
+++ b/docs/misc/arm/device-tree/booting.txt
@@ -215,6 +215,14 @@ with the following properties:
     In the future other possible property values might be added to
     enable only selected interfaces.
 
+- xen,domain-p2m-mem-mb
+
+    Optional. A 32-bit integer specifying the amount of megabytes of RAM
+    used for the domain P2M pool. This is in-sync with the shadow_memory
+    option in xl.cfg. Leaving this field empty in device tree will lead to
+    the default size of domain P2M pool, i.e. 1MB per guest vCPU plus 4KB
+    per MB of guest RAM plus 512KB for guest extended regions.
+
 Under the "xen,domain" compatible node, one or more sub-nodes are present
 for the DomU kernel and ramdisk.
 
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 746ad3438a..2c84e6dbbb 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -1002,6 +1002,7 @@ enum {
     PROG_page,
     PROG_mapping,
     PROG_p2m,
+    PROG_p2m_pool,
     PROG_done,
 };
 
@@ -1067,6 +1068,11 @@ int domain_relinquish_resources(struct domain *d)
         if ( ret )
             return ret;
 
+    PROGRESS(p2m_pool):
+        ret = p2m_teardown_allocation(d);
+        if( ret )
+            return ret;
+
     PROGRESS(done):
         break;
 
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 40e3c2e119..db97536fe8 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -3622,6 +3622,21 @@ static void __init find_gnttab_region(struct domain *d,
            kinfo->gnttab_start, kinfo->gnttab_start + kinfo->gnttab_size);
 }
 
+static unsigned long __init domain_p2m_pages(unsigned long maxmem_kb,
+                                             unsigned int smp_cpus)
+{
+    /*
+     * Keep in sync with libxl__get_required_paging_memory().
+     * 256 pages (1MB) per vcpu, plus 1 page per MiB of RAM for the P2M map,
+     * plus 128 pages to cover extended regions.
+     */
+    unsigned long memkb = 4 * (256 * smp_cpus + (maxmem_kb / 1024) + 128);
+
+    BUILD_BUG_ON(PAGE_SIZE != SZ_4K);
+
+    return DIV_ROUND_UP(memkb, 1024) << (20 - PAGE_SHIFT);
+}
+
 static int __init construct_domain(struct domain *d, struct kernel_info *kinfo)
 {
     unsigned int i;
@@ -3733,6 +3748,8 @@ static int __init construct_domU(struct domain *d,
     const char *dom0less_enhanced;
     int rc;
     u64 mem;
+    u32 p2m_mem_mb;
+    unsigned long p2m_pages;
 
     rc = dt_property_read_u64(node, "memory", &mem);
     if ( !rc )
@@ -3742,6 +3759,18 @@ static int __init construct_domU(struct domain *d,
     }
     kinfo.unassigned_mem = (paddr_t)mem * SZ_1K;
 
+    rc = dt_property_read_u32(node, "xen,domain-p2m-mem-mb", &p2m_mem_mb);
+    /* If xen,domain-p2m-mem-mb is not specified, use the default value. */
+    p2m_pages = rc ?
+                p2m_mem_mb << (20 - PAGE_SHIFT) :
+                domain_p2m_pages(mem, d->max_vcpus);
+
+    spin_lock(&d->arch.paging.lock);
+    rc = p2m_set_allocation(d, p2m_pages, NULL);
+    spin_unlock(&d->arch.paging.lock);
+    if ( rc != 0 )
+        return rc;
+
     printk("*** LOADING DOMU cpus=%u memory=%"PRIx64"KB ***\n", d->max_vcpus, mem);
 
     kinfo.vpl011 = dt_property_read_bool(node, "vpl011");
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 9bf72e6930..c8fdeb1240 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -50,6 +50,9 @@ static int handle_vuart_init(struct domain *d,
 static long p2m_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
                        XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
+    long rc;
+    bool preempted = false;
+
     if ( unlikely(d == current->domain) )
     {
         printk(XENLOG_ERR "Tried to do a p2m domctl op on itself.\n");
@@ -66,9 +69,27 @@ static long p2m_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
     switch ( sc->op )
     {
     case XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION:
-        return 0;
+    {
+        /* Allow and handle preemption */
+        spin_lock(&d->arch.paging.lock);
+        rc = p2m_set_allocation(d, sc->mb << (20 - PAGE_SHIFT), &preempted);
+        spin_unlock(&d->arch.paging.lock);
+
+        if ( preempted )
+            /* Not finished. Set up to re-run the call. */
+            rc = hypercall_create_continuation(__HYPERVISOR_domctl, "h",
+                                               u_domctl);
+        else
+            /* Finished. Return the new allocation. */
+            sc->mb = p2m_get_allocation(d);
+
+        return rc;
+    }
     case XEN_DOMCTL_SHADOW_OP_GET_ALLOCATION:
+    {
+        sc->mb = p2m_get_allocation(d);
         return 0;
+    }
     default:
     {
         printk(XENLOG_ERR "Bad p2m domctl op %u\n", sc->op);
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index db385fe410..f17500ddf3 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -44,6 +44,54 @@ static uint64_t generate_vttbr(uint16_t vmid, mfn_t root_mfn)
     return (mfn_to_maddr(root_mfn) | ((uint64_t)vmid << 48));
 }
 
+static struct page_info *p2m_alloc_page(struct domain *d)
+{
+    struct page_info *pg;
+
+    spin_lock(&d->arch.paging.lock);
+    /*
+     * For hardware domain, there should be no limit in the number of pages that
+     * can be allocated, so that the kernel may take advantage of the extended
+     * regions. Hence, allocate p2m pages for hardware domains from heap.
+     */
+    if ( is_hardware_domain(d) )
+    {
+        pg = alloc_domheap_page(NULL, 0);
+        if ( pg == NULL )
+        {
+            printk(XENLOG_G_ERR "Failed to allocate P2M pages for hwdom.\n");
+            spin_unlock(&d->arch.paging.lock);
+            return NULL;
+        }
+    }
+    else
+    {
+        pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
+        if ( unlikely(!pg) )
+        {
+            spin_unlock(&d->arch.paging.lock);
+            return NULL;
+        }
+        d->arch.paging.p2m_total_pages--;
+    }
+    spin_unlock(&d->arch.paging.lock);
+
+    return pg;
+}
+
+static void p2m_free_page(struct domain *d, struct page_info *pg)
+{
+    spin_lock(&d->arch.paging.lock);
+    if ( is_hardware_domain(d) )
+        free_domheap_page(pg);
+    else
+    {
+        d->arch.paging.p2m_total_pages++;
+        page_list_add_tail(pg, &d->arch.paging.p2m_freelist);
+    }
+    spin_unlock(&d->arch.paging.lock);
+}
+
 /* Return the size of the pool, rounded up to the nearest MB */
 unsigned int p2m_get_allocation(struct domain *d)
 {
@@ -747,7 +795,7 @@ static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry)
 
     ASSERT(!p2m_is_valid(*entry));
 
-    page = alloc_domheap_page(NULL, 0);
+    page = p2m_alloc_page(p2m->domain);
     if ( page == NULL )
         return -ENOMEM;
 
@@ -877,7 +925,7 @@ static void p2m_free_entry(struct p2m_domain *p2m,
     pg = mfn_to_page(mfn);
 
     page_list_del(pg, &p2m->pages);
-    free_domheap_page(pg);
+    p2m_free_page(p2m->domain, pg);
 }
 
 static bool p2m_split_superpage(struct p2m_domain *p2m, lpae_t *entry,
@@ -901,7 +949,7 @@ static bool p2m_split_superpage(struct p2m_domain *p2m, lpae_t *entry,
     ASSERT(level < target);
     ASSERT(p2m_is_superpage(*entry, level));
 
-    page = alloc_domheap_page(NULL, 0);
+    page = p2m_alloc_page(p2m->domain);
     if ( !page )
         return false;
 
@@ -1665,7 +1713,7 @@ int p2m_teardown(struct domain *d)
 
     while ( (pg = page_list_remove_head(&p2m->pages)) )
     {
-        free_domheap_page(pg);
+        p2m_free_page(p2m->domain, pg);
         count++;
         /* Arbitrarily preempt every 512 iterations */
         if ( !(count % 512) && hypercall_preempt_check() )
@@ -1689,6 +1737,7 @@ void p2m_final_teardown(struct domain *d)
         return;
 
     ASSERT(page_list_empty(&p2m->pages));
+    ASSERT(page_list_empty(&d->arch.paging.p2m_freelist));
 
     if ( p2m->root )
         free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:35:25 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] gnttab: correct locking on transitive grant copy error path
Message-Id: <E1oluzT-0004gy-TH@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:35:23 +0000

commit 6e3aab858eef614a21a782a3b73acc88e74690ea
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 14:29:30 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:29:30 2022 +0200

    gnttab: correct locking on transitive grant copy error path
    
    While the comment next to the lock dropping in preparation of
    recursively calling acquire_grant_for_copy() mistakenly talks about the
    rd == td case (excluded a few lines further up), the same concerns apply
    to the calling of release_grant_for_copy() on a subsequent error path.
    
    This is CVE-2022-33748 / XSA-411.
    
    Fixes: ad48fb963dbf ("gnttab: fix transitive grant handling")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
 xen/common/grant_table.c | 19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index fba329dcc2..ee7cc496b8 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -2622,9 +2622,8 @@ acquire_grant_for_copy(
                      trans_domid);
 
         /*
-         * acquire_grant_for_copy() could take the lock on the
-         * remote table (if rd == td), so we have to drop the lock
-         * here and reacquire.
+         * acquire_grant_for_copy() will take the lock on the remote table,
+         * so we have to drop the lock here and reacquire.
          */
         active_entry_release(act);
         grant_read_unlock(rgt);
@@ -2661,11 +2660,25 @@ acquire_grant_for_copy(
                           act->trans_gref != trans_gref ||
                           !act->is_sub_page)) )
         {
+            /*
+             * Like above for acquire_grant_for_copy() we need to drop and then
+             * re-acquire the locks here to prevent lock order inversion issues.
+             * Unlike for acquire_grant_for_copy() we don't need to re-check
+             * anything, as release_grant_for_copy() doesn't depend on the grant
+             * table entry: It only updates internal state and the status flags.
+             */
+            active_entry_release(act);
+            grant_read_unlock(rgt);
+
             release_grant_for_copy(td, trans_gref, readonly);
             rcu_unlock_domain(td);
+
+            grant_read_lock(rgt);
+            act = active_entry_acquire(rgt, gref);
             reduce_status_for_pin(rd, act, status, readonly);
             active_entry_release(act);
             grant_read_unlock(rgt);
+
             put_page(*page);
             *page = NULL;
             return ERESTART;
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:35:35 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] x86emul: respect NSCB
Message-Id: <E1oluze-0004hk-0U@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:35:34 +0000

commit 87a20c98d9f0f422727fe9b4b9e22c2c43a5cd9c
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 14:30:41 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:30:41 2022 +0200

    x86emul: respect NSCB
    
    protmode_load_seg() would better adhere to that "feature" of clearing
    base (and limit) during NULL selector loads.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/arch/x86/x86_emulate/x86_emulate.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/x86_emulate/x86_emulate.c b/xen/arch/x86/x86_emulate/x86_emulate.c
index f6778dd493..e38f98b547 100644
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -1970,6 +1970,7 @@ amd_like(const struct x86_emulate_ctxt *ctxt)
 #define vcpu_has_tbm()         (ctxt->cpuid->extd.tbm)
 #define vcpu_has_clzero()      (ctxt->cpuid->extd.clzero)
 #define vcpu_has_wbnoinvd()    (ctxt->cpuid->extd.wbnoinvd)
+#define vcpu_has_nscb()        (ctxt->cpuid->extd.nscb)
 
 #define vcpu_has_bmi1()        (ctxt->cpuid->feat.bmi1)
 #define vcpu_has_hle()         (ctxt->cpuid->feat.hle)
@@ -2102,7 +2103,7 @@ protmode_load_seg(
         case x86_seg_tr:
             goto raise_exn;
         }
-        if ( !_amd_like(cp) || !ops->read_segment ||
+        if ( !_amd_like(cp) || vcpu_has_nscb() || !ops->read_segment ||
              ops->read_segment(seg, sreg, ctxt) != X86EMUL_OKAY )
             memset(sreg, 0, sizeof(*sreg));
         else
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:35:45 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] VMX: correct error handling in vmx_create_vmcs()
Message-Id: <E1oluzo-0004iZ-3c@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:35:44 +0000

commit 448d28309f1a966bdc850aff1a637e0b79a03e43
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Wed Oct 12 17:57:56 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Wed Oct 12 17:57:56 2022 +0200

    VMX: correct error handling in vmx_create_vmcs()
    
    With the addition of vmx_add_msr() calls to construct_vmcs() there are
    now cases where simply freeing the VMCS isn't enough: The MSR bitmap
    page as well as one of the MSR area ones (if it's the 2nd vmx_add_msr()
    which fails) may also need freeing. Switch to using vmx_destroy_vmcs()
    instead.
    
    Fixes: 3bd36952dab6 ("x86/spec-ctrl: Introduce an option to control L1D_FLUSH for HVM HAP guests")
    Fixes: 53a570b28569 ("x86/spec-ctrl: Support IBPB-on-entry")
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/arch/x86/hvm/vmx/vmcs.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 4f12fa06ac..a1aca1ec04 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -1821,7 +1821,7 @@ int vmx_create_vmcs(struct vcpu *v)
 
     if ( (rc = construct_vmcs(v)) != 0 )
     {
-        vmx_free_vmcs(vmx->vmcs_pa);
+        vmx_destroy_vmcs(v);
         return rc;
     }
 
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:35:56 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] tools/ocaml/xc: Fix code legibility in stub_xc_domain_create()
Message-Id: <E1oluzy-0004jW-6a@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:35:54 +0000

commit 1f232670f806d20471fc4205069448292e2df2df
Author:     Andrew Cooper <andrew.cooper3@citrix.com>
AuthorDate: Wed Oct 12 11:02:08 2022 +0100
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Thu Oct 13 11:41:48 2022 +0100

    tools/ocaml/xc: Fix code legibility in stub_xc_domain_create()
    
    Reposition the defines to match the outer style and to make the logic
    half-legible.
    
    No functional change.
    
    Fixes: 0570d7f276dd ("x86/msr: introduce an option for compatible MSR behavior selection")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/ocaml/libs/xc/xenctrl_stubs.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index 19335bdf45..fe9c00ce00 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -232,22 +232,20 @@ CAMLprim value stub_xc_domain_create(value xch, value wanted_domid, value config
 
         /* Mnemonics for the named fields inside xen_x86_arch_domainconfig */
 #define VAL_EMUL_FLAGS          Field(arch_domconfig, 0)
+#define VAL_MISC_FLAGS          Field(arch_domconfig, 1)
 
 		cfg.arch.emulation_flags = ocaml_list_to_c_bitmap
 			/* ! x86_arch_emulation_flags X86_EMU_ none */
 			/* ! XEN_X86_EMU_ XEN_X86_EMU_ALL all */
 			(VAL_EMUL_FLAGS);
 
-#undef VAL_EMUL_FLAGS
-
-#define VAL_MISC_FLAGS          Field(arch_domconfig, 1)
-
 		cfg.arch.misc_flags = ocaml_list_to_c_bitmap
 			/* ! x86_arch_misc_flags X86_ none */
 			/* ! XEN_X86_ XEN_X86_MISC_FLAGS_MAX max */
 			(VAL_MISC_FLAGS);
 
 #undef VAL_MISC_FLAGS
+#undef VAL_EMUL_FLAGS
 
 #else
 		caml_failwith("Unhandled: x86");
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:36:04 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] tools/ocaml/xc: Address ABI issues with physinfo arch flags
Message-Id: <E1olv08-0004kH-9c@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:36:04 +0000

commit 0823d57d71c7023bea94d483f69f7b5e62820102
Author:     Andrew Cooper <andrew.cooper3@citrix.com>
AuthorDate: Mon Jul 25 18:36:29 2022 +0100
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Thu Oct 13 11:45:19 2022 +0100

    tools/ocaml/xc: Address ABI issues with physinfo arch flags
    
    The current bindings function, but the preexisting
    
      type physinfo_arch_cap_flag =
             | X86 of x86_physinfo_arch_cap_flag
    
    is a special case in the OCaml type system with an unusual indirection, and
    will break when a second option, e.g. `| ARM of ...` is added.
    
    Also, the position of the list is logically wrong.  Currently, the types express
    a list of elements which might be an x86 flag or an arm flag (and can
    intermix), whereas what we actually want is either a list of x86 flags, or a
    list of ARM flags (that cannot intermix).
    
    Rework the OCaml types to avoid the ABI special case and move the list
    primitive, and adjust the C bindings to match.
    
    Fixes: 2ce11ce249a3 ("x86/HVM: allow per-domain usage of hardware virtualized APIC")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/ocaml/libs/xc/xenctrl.ml      | 10 ++++++----
 tools/ocaml/libs/xc/xenctrl.mli     | 11 +++++++----
 tools/ocaml/libs/xc/xenctrl_stubs.c | 21 +++++++++++----------
 3 files changed, 24 insertions(+), 18 deletions(-)

diff --git a/tools/ocaml/libs/xc/xenctrl.ml b/tools/ocaml/libs/xc/xenctrl.ml
index 0c71e5eef3..28ed642231 100644
--- a/tools/ocaml/libs/xc/xenctrl.ml
+++ b/tools/ocaml/libs/xc/xenctrl.ml
@@ -130,13 +130,15 @@ type physinfo_cap_flag =
 	| CAP_Gnttab_v1
 	| CAP_Gnttab_v2
 
+type arm_physinfo_cap_flag
 
-type x86_physinfo_arch_cap_flag =
+type x86_physinfo_cap_flag =
 	| CAP_X86_ASSISTED_XAPIC
 	| CAP_X86_ASSISTED_X2APIC
 
-type physinfo_arch_cap_flag =
-	| X86 of x86_physinfo_arch_cap_flag
+type arch_physinfo_cap_flags =
+	| ARM of arm_physinfo_cap_flag list
+	| X86 of x86_physinfo_cap_flag list
 
 type physinfo =
 {
@@ -151,7 +153,7 @@ type physinfo =
 	(* XXX hw_cap *)
 	capabilities     : physinfo_cap_flag list;
 	max_nr_cpus      : int;
-	arch_capabilities : physinfo_arch_cap_flag list;
+	arch_capabilities : arch_physinfo_cap_flags;
 }
 
 type version =
diff --git a/tools/ocaml/libs/xc/xenctrl.mli b/tools/ocaml/libs/xc/xenctrl.mli
index a8458e19ca..c2076d60c9 100644
--- a/tools/ocaml/libs/xc/xenctrl.mli
+++ b/tools/ocaml/libs/xc/xenctrl.mli
@@ -115,12 +115,15 @@ type physinfo_cap_flag =
   | CAP_Gnttab_v1
   | CAP_Gnttab_v2
 
-type x86_physinfo_arch_cap_flag =
+type arm_physinfo_cap_flag
+
+type x86_physinfo_cap_flag =
   | CAP_X86_ASSISTED_XAPIC
   | CAP_X86_ASSISTED_X2APIC
 
-type physinfo_arch_cap_flag =
-  | X86 of x86_physinfo_arch_cap_flag
+type arch_physinfo_cap_flags =
+  | ARM of arm_physinfo_cap_flag list
+  | X86 of x86_physinfo_cap_flag list
 
 type physinfo = {
   threads_per_core : int;
@@ -133,7 +136,7 @@ type physinfo = {
   scrub_pages      : nativeint;
   capabilities     : physinfo_cap_flag list;
   max_nr_cpus      : int; (** compile-time max possible number of nr_cpus *)
-  arch_capabilities : physinfo_arch_cap_flag list;
+  arch_capabilities : arch_physinfo_cap_flags;
 }
 type version = { major : int; minor : int; extra : string; }
 type compile_info = {
diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index fe9c00ce00..a8789d19be 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -716,9 +716,9 @@ CAMLprim value stub_xc_send_debug_keys(value xch, value keys)
 CAMLprim value stub_xc_physinfo(value xch)
 {
 	CAMLparam1(xch);
-	CAMLlocal4(physinfo, cap_list, x86_arch_cap_list, arch_cap_list);
+	CAMLlocal4(physinfo, cap_list, arch_cap_flags, arch_cap_list);
 	xc_physinfo_t c_physinfo;
-	int r;
+	int r, arch_cap_flags_tag;
 
 	caml_enter_blocking_section();
 	r = xc_physinfo(_H(xch), &c_physinfo);
@@ -748,18 +748,19 @@ CAMLprim value stub_xc_physinfo(value xch)
 	Store_field(physinfo, 9, Val_int(c_physinfo.max_cpu_id + 1));
 
 #if defined(__i386__) || defined(__x86_64__)
-	x86_arch_cap_list = c_bitmap_to_ocaml_list
-		/* ! x86_physinfo_arch_cap_flag CAP_X86_ none */
+	arch_cap_list = c_bitmap_to_ocaml_list
+		/* ! x86_physinfo_cap_flag CAP_X86_ none */
 		/* ! XEN_SYSCTL_PHYSCAP_X86_ XEN_SYSCTL_PHYSCAP_X86_MAX max */
 		(c_physinfo.arch_capabilities);
-	/*
-	 * arch_capabilities: physinfo_arch_cap_flag list;
-	 */
-	arch_cap_list = x86_arch_cap_list;
+
+	arch_cap_flags_tag = 1; /* tag x86 */
 #else
-	arch_cap_list = Val_emptylist;
+	caml_failwith("Unhandled architecture");
 #endif
-	Store_field(physinfo, 10, arch_cap_list);
+
+	arch_cap_flags = caml_alloc_small(1, arch_cap_flags_tag);
+	Store_field(arch_cap_flags, 0, arch_cap_list);
+	Store_field(physinfo, 10, arch_cap_flags);
 
 	CAMLreturn(physinfo);
 }
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:36:14 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] x86/mwait-idle: add 'preferred-cstates' command line option
Message-Id: <E1olv0I-0004ks-Cp@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:36:14 +0000

commit 9fc9a5c21612993fbd2bb1acdd68d9181ab6f7d2
Author:     Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
AuthorDate: Thu Oct 13 17:52:36 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Thu Oct 13 17:52:36 2022 +0200

    x86/mwait-idle: add 'preferred-cstates' command line option
    
    On Sapphire Rapids Xeon (SPR) the C1 and C1E states are basically mutually
    exclusive - only one of them can be enabled. By default, 'intel_idle' driver
    enables C1 and disables C1E. However, some users prefer to use C1E instead of
    C1, because it saves more energy.
    
    This patch adds a new module parameter ('preferred_cstates') for enabling C1E
    and disabling C1. Here is the idea behind it.
    
    1. This option has effect only for "mutually exclusive" C-states like C1 and
       C1E on SPR.
    2. It does not have any effect on independent C-states, which do not require
       other C-states to be disabled (most states on most platforms as of today).
    3. For mutually exclusive C-states, the 'intel_idle' driver always has a
       reasonable default, such as enabling C1 on SPR by default. On other
       platforms, the default may be different.
    4. Users can override the default using the 'preferred_cstates' parameter.
    5. The parameter accepts the preferred C-states bit-mask, similarly to the
       existing 'states_off' parameter.
    6. This parameter is not limited to C1/C1E, and leaves room for supporting
       other mutually exclusive C-states, if they come in the future.
    
    Today 'intel_idle' can only be compiled-in, which means that on SPR, in order
    to disable C1 and enable C1E, users should boot with the following kernel
    argument: intel_idle.preferred_cstates=4
    
    Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    Origin: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git da0e58c038e6
    
    Enable C1E (if requested) not only on the BSP's socket / package. Alter
    command line option to fit our model, and extend it to also accept
    string form arguments.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 docs/misc/xen-command-line.pandoc |   6 ++
 xen/arch/x86/cpu/mwait-idle.c     | 132 ++++++++++++++++++++++++++++++++------
 2 files changed, 119 insertions(+), 19 deletions(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index 68389843b2..0fbdcb574f 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -1926,6 +1926,12 @@ paging controls access to usermode addresses.
 ### ple_window (Intel)
 > `= <integer>`
 
+### preferred-cstates (x86)
+> `= ( <integer> | List of ( C1 | C1E | C2 | ... ) )`
+
+This is a mask of C-states which are to be preferred.  This option is
+applicable only on hardware where certain C-states are exclusive of one another.
+
 ### psr (Intel)
 > `= List of ( cmt:<boolean> | rmid_max:<integer> | cat:<boolean> | cos_max:<integer> | cdp:<boolean> )`
 
diff --git a/xen/arch/x86/cpu/mwait-idle.c b/xen/arch/x86/cpu/mwait-idle.c
index 5d77672f6b..cc62ddf743 100644
--- a/xen/arch/x86/cpu/mwait-idle.c
+++ b/xen/arch/x86/cpu/mwait-idle.c
@@ -82,10 +82,29 @@ boolean_param("mwait-idle", opt_mwait_idle);
 
 static unsigned int mwait_substates;
 
+/*
+ * Some platforms come with mutually exclusive C-states, so that if one is
+ * enabled, the other C-states must not be used. Example: C1 and C1E on
+ * Sapphire Rapids platform. This parameter allows for selecting the
+ * preferred C-states among the groups of mutually exclusive C-states - the
+ * selected C-states will be registered, the other C-states from the mutually
+ * exclusive group won't be registered. If the platform has no mutually
+ * exclusive C-states, this parameter has no effect.
+ */
+static unsigned int __ro_after_init preferred_states_mask;
+static char __initdata preferred_states[64];
+string_param("preferred-cstates", preferred_states);
+
 #define LAPIC_TIMER_ALWAYS_RELIABLE 0xFFFFFFFF
 /* Reliable LAPIC Timer States, bit 1 for C1 etc. Default to only C1. */
 static unsigned int lapic_timer_reliable_states = (1 << 1);
 
+enum c1e_promotion {
+	C1E_PROMOTION_PRESERVE,
+	C1E_PROMOTION_ENABLE,
+	C1E_PROMOTION_DISABLE
+};
+
 struct idle_cpu {
 	const struct cpuidle_state *state_table;
 
@@ -95,7 +114,7 @@ struct idle_cpu {
 	 */
 	unsigned long auto_demotion_disable_flags;
 	bool byt_auto_demotion_disable_flag;
-	bool disable_promotion_to_c1e;
+	enum c1e_promotion c1e_promotion;
 };
 
 static const struct idle_cpu *icpu;
@@ -924,6 +943,15 @@ static void cf_check byt_auto_demotion_disable(void *dummy)
 	wrmsrl(MSR_MC6_DEMOTION_POLICY_CONFIG, 0);
 }
 
+static void cf_check c1e_promotion_enable(void *dummy)
+{
+	uint64_t msr_bits;
+
+	rdmsrl(MSR_IA32_POWER_CTL, msr_bits);
+	msr_bits |= 0x2;
+	wrmsrl(MSR_IA32_POWER_CTL, msr_bits);
+}
+
 static void cf_check c1e_promotion_disable(void *dummy)
 {
 	u64 msr_bits;
@@ -936,7 +964,7 @@ static void cf_check c1e_promotion_disable(void *dummy)
 static const struct idle_cpu idle_cpu_nehalem = {
 	.state_table = nehalem_cstates,
 	.auto_demotion_disable_flags = NHM_C1_AUTO_DEMOTE | NHM_C3_AUTO_DEMOTE,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
 static const struct idle_cpu idle_cpu_atom = {
@@ -954,64 +982,64 @@ static const struct idle_cpu idle_cpu_lincroft = {
 
 static const struct idle_cpu idle_cpu_snb = {
 	.state_table = snb_cstates,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
 static const struct idle_cpu idle_cpu_byt = {
 	.state_table = byt_cstates,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 	.byt_auto_demotion_disable_flag = true,
 };
 
 static const struct idle_cpu idle_cpu_cht = {
 	.state_table = cht_cstates,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 	.byt_auto_demotion_disable_flag = true,
 };
 
 static const struct idle_cpu idle_cpu_ivb = {
 	.state_table = ivb_cstates,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
 static const struct idle_cpu idle_cpu_ivt = {
 	.state_table = ivt_cstates,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
 static const struct idle_cpu idle_cpu_hsw = {
 	.state_table = hsw_cstates,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
 static const struct idle_cpu idle_cpu_bdw = {
 	.state_table = bdw_cstates,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
 static const struct idle_cpu idle_cpu_skl = {
 	.state_table = skl_cstates,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
 static const struct idle_cpu idle_cpu_skx = {
 	.state_table = skx_cstates,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
 static const struct idle_cpu idle_cpu_icx = {
-       .state_table = icx_cstates,
-       .disable_promotion_to_c1e = true,
+	.state_table = icx_cstates,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
 static struct idle_cpu __read_mostly idle_cpu_spr = {
 	.state_table = spr_cstates,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
 static const struct idle_cpu idle_cpu_avn = {
 	.state_table = avn_cstates,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
 static const struct idle_cpu idle_cpu_knl = {
@@ -1020,17 +1048,17 @@ static const struct idle_cpu idle_cpu_knl = {
 
 static const struct idle_cpu idle_cpu_bxt = {
 	.state_table = bxt_cstates,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
 static const struct idle_cpu idle_cpu_dnv = {
 	.state_table = dnv_cstates,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
 static const struct idle_cpu idle_cpu_snr = {
 	.state_table = snr_cstates,
-	.disable_promotion_to_c1e = true,
+	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
 #define ICPU(model, cpu) \
@@ -1240,6 +1268,25 @@ static void __init skx_idle_state_table_update(void)
 	}
 }
 
+/*
+ * spr_idle_state_table_update - Adjust Sapphire Rapids idle states table.
+ */
+static void __init spr_idle_state_table_update(void)
+{
+	/* Check if user prefers C1E over C1. */
+	if (preferred_states_mask & BIT(2, U)) {
+		if (preferred_states_mask & BIT(1, U))
+			/* Both can't be enabled, stick to the defaults. */
+			return;
+
+		spr_cstates[0].flags |= CPUIDLE_FLAG_DISABLED;
+		spr_cstates[1].flags &= ~CPUIDLE_FLAG_DISABLED;
+
+		/* Request enabling C1E using the "C1E promotion" bit. */
+		idle_cpu_spr.c1e_promotion = C1E_PROMOTION_ENABLE;
+	}
+}
+
 /*
  * mwait_idle_state_table_update()
  *
@@ -1261,6 +1308,9 @@ static void __init mwait_idle_state_table_update(void)
 	case INTEL_FAM6_SKYLAKE_X:
 		skx_idle_state_table_update();
 		break;
+	case INTEL_FAM6_SAPPHIRERAPIDS_X:
+		spr_idle_state_table_update();
+		break;
 	}
 }
 
@@ -1268,6 +1318,7 @@ static int __init mwait_idle_probe(void)
 {
 	unsigned int eax, ebx, ecx;
 	const struct x86_cpu_id *id = x86_match_cpu(intel_idle_ids);
+	const char *str;
 
 	if (!id) {
 		pr_debug(PREFIX "does not run on family %d model %d\n",
@@ -1309,6 +1360,39 @@ static int __init mwait_idle_probe(void)
 	pr_debug(PREFIX "lapic_timer_reliable_states %#x\n",
 		 lapic_timer_reliable_states);
 
+	str = preferred_states;
+	if (isdigit(str[0]))
+		preferred_states_mask = simple_strtoul(str, &str, 0);
+	else if (str[0])
+	{
+		const char *ss;
+
+		do {
+			const struct cpuidle_state *state = icpu->state_table;
+			unsigned int bit = 1;
+
+			ss = strchr(str, ',');
+			if (!ss)
+				ss = strchr(str, '\0');
+
+			for (; state->name[0]; ++state) {
+				bit <<= 1;
+				if (!cmdline_strcmp(str, state->name)) {
+					preferred_states_mask |= bit;
+					break;
+				}
+			}
+			if (!state->name[0])
+				break;
+
+			str = ss + 1;
+		} while (*ss);
+
+		str -= str == ss + 1;
+	}
+	if (str[0])
+		printk("unrecognized \"preferred-cstates=%s\"\n", str);
+
 	mwait_idle_state_table_update();
 
 	return 0;
@@ -1400,8 +1484,18 @@ static int cf_check mwait_idle_cpu_init(
 	if (icpu->byt_auto_demotion_disable_flag)
 		on_selected_cpus(cpumask_of(cpu), byt_auto_demotion_disable, NULL, 1);
 
-	if (icpu->disable_promotion_to_c1e)
+	switch (icpu->c1e_promotion) {
+	case C1E_PROMOTION_DISABLE:
 		on_selected_cpus(cpumask_of(cpu), c1e_promotion_disable, NULL, 1);
+		break;
+
+	case C1E_PROMOTION_ENABLE:
+		on_selected_cpus(cpumask_of(cpu), c1e_promotion_enable, NULL, 1);
+		break;
+
+	case C1E_PROMOTION_PRESERVE:
+		break;
+	}
 
 	return NOTIFY_DONE;
 }
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:36:25 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 16:36:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427864.677368 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv0T-0001bX-S7; Fri, 21 Oct 2022 16:36:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427864.677368; Fri, 21 Oct 2022 16:36:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv0T-0001bN-PE; Fri, 21 Oct 2022 16:36:25 +0000
Received: by outflank-mailman (input) for mailman id 427864;
 Fri, 21 Oct 2022 16:36:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv0S-0001bD-HL
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:36:24 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv0S-000772-GZ
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:36:24 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv0S-0004lh-Fp
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:36:24 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=9masArvRw+FDojA9g9QK8S2aPw9NyFGRtBRa10//sPQ=; b=TZX3/NtjQeT8X8EHZ3P4WpqkR8
	IYfINqFjzb7vd4lVtZy4IJYKumssMFOKh/PzEdSb7tOjw3DliIHLR4hhKF4vyOKjxu1U2XVCUoUzE
	XFIXMJUyLM9O1kwPmwceDbCTmgfcYuVnnqUjpIMVXBGgnTuYTQ/A0/rS6OWKbzSLhJK4=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] x86/mwait-idle: add core C6 optimization for SPR
Message-Id: <E1olv0S-0004lh-Fp@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:36:24 +0000

commit 13ecd1c216433125836c0516219a0854640eeeed
Author:     Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
AuthorDate: Thu Oct 13 17:53:26 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Thu Oct 13 17:53:26 2022 +0200

    x86/mwait-idle: add core C6 optimization for SPR
    
    Add a Sapphire Rapids Xeon C6 optimization, similar to what we have for Sky Lake
    Xeon: if package C6 is disabled, adjust C6 exit latency and target residency to
    match core C6 values, instead of using the default package C6 values.
    
    Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    Origin: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 3a9cf77b60dc
    
    Make sure a contradictory "preferred-cstates" wouldn't cause bypassing
    of the added logic.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/arch/x86/cpu/mwait-idle.c | 23 ++++++++++++++++++-----
 1 file changed, 18 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/cpu/mwait-idle.c b/xen/arch/x86/cpu/mwait-idle.c
index cc62ddf743..17d756881a 100644
--- a/xen/arch/x86/cpu/mwait-idle.c
+++ b/xen/arch/x86/cpu/mwait-idle.c
@@ -1273,18 +1273,31 @@ static void __init skx_idle_state_table_update(void)
  */
 static void __init spr_idle_state_table_update(void)
 {
-	/* Check if user prefers C1E over C1. */
-	if (preferred_states_mask & BIT(2, U)) {
-		if (preferred_states_mask & BIT(1, U))
-			/* Both can't be enabled, stick to the defaults. */
-			return;
+	uint64_t msr;
 
+	/* Check if user prefers C1E over C1. */
+	if ((preferred_states_mask & BIT(2, U)) &&
+	    !(preferred_states_mask & BIT(1, U))) {
+		/* Disable C1 and enable C1E. */
 		spr_cstates[0].flags |= CPUIDLE_FLAG_DISABLED;
 		spr_cstates[1].flags &= ~CPUIDLE_FLAG_DISABLED;
 
 		/* Request enabling C1E using the "C1E promotion" bit. */
 		idle_cpu_spr.c1e_promotion = C1E_PROMOTION_ENABLE;
 	}
+
+	/*
+	 * By default, the C6 state assumes the worst-case scenario of package
+	 * C6. However, if PC6 is disabled, we update the numbers to match
+	 * core C6.
+	 */
+	rdmsrl(MSR_PKG_CST_CONFIG_CONTROL, msr);
+
+	/* Limit value 2 and above allow for PC6. */
+	if ((msr & 0x7) < 2) {
+		spr_cstates[2].exit_latency = 190;
+		spr_cstates[2].target_residency = 600;
+	}
 }
 
 /*
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:36:35 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 16:36:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427865.677373 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv0d-0001el-TJ; Fri, 21 Oct 2022 16:36:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427865.677373; Fri, 21 Oct 2022 16:36:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv0d-0001ed-Qk; Fri, 21 Oct 2022 16:36:35 +0000
Received: by outflank-mailman (input) for mailman id 427865;
 Fri, 21 Oct 2022 16:36:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv0c-0001eP-KH
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:36:34 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv0c-00077A-JW
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:36:34 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv0c-0004mY-Il
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:36:34 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=bqglpeRxe9UdJ0gLOvy93873Aw+LiVO7YjnE4q2AukU=; b=YexiVzZnngoJgz1Rrw1LwZfG19
	bkwmgGqR6N4jOr7MCZ+dH+Lfreyw3e5/5K7rLNwrxFx3VJ0WfHh6pXsVORBjyQ2W+JG1HOJXTRwWH
	NsdxA709iYYO5VDT41auvnBv1DCfhUbWUNPby690gQ5FvyYKepvfntPC2d894A5QStAM=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] x86/mwait-idle: add AlderLake support
Message-Id: <E1olv0c-0004mY-Il@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:36:34 +0000

commit 0fa9c3ef1e9196e8cd38c1532d29cf670dc21bcb
Author:     Zhang Rui <rui.zhang@intel.com>
AuthorDate: Thu Oct 13 17:54:23 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Thu Oct 13 17:54:23 2022 +0200

    x86/mwait-idle: add AlderLake support
    
    Similar to SPR, the C1 and C1E states on ADL are mutually exclusive.
    Only one of them can be enabled at a time.
    
    But in contrast to SPR, which as a Xeon processor usually has strong
    latency requirements, C1E is preferred on ADL for better energy
    efficiency.
    
    Add custom C-state tables for ADL with both C1 and C1E, and
    
     1. Enable the "C1E promotion" bit in MSR_IA32_POWER_CTL and mark C1
        with the CPUIDLE_FLAG_UNUSABLE flag, so C1 is not available by
        default.
    
     2. Add support for the "preferred_cstates" module parameter, so that
        users can choose to use C1 instead of C1E by booting with
        "intel_idle.preferred_cstates=2".
    
    Separate custom C-state tables are introduced for the ADL mobile and
    desktop processors, because of the exit latency differences between
    these two variants, especially with respect to PC10.
    
    Signed-off-by: Zhang Rui <rui.zhang@intel.com>
    [ rjw: Changelog edits, code rearrangement ]
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    Origin: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git d1cf8bbfed1e
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/arch/x86/cpu/mwait-idle.c | 116 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 116 insertions(+)

diff --git a/xen/arch/x86/cpu/mwait-idle.c b/xen/arch/x86/cpu/mwait-idle.c
index 17d756881a..86c47a04c7 100644
--- a/xen/arch/x86/cpu/mwait-idle.c
+++ b/xen/arch/x86/cpu/mwait-idle.c
@@ -605,6 +605,84 @@ static const struct cpuidle_state icx_cstates[] = {
        {}
 };
 
+/*
+ * On AlderLake C1 has to be disabled if C1E is enabled, and vice versa.
+ * C1E is enabled only if "C1E promotion" bit is set in MSR_IA32_POWER_CTL.
+ * But in this case there is effectively no C1, because C1 requests are
+ * promoted to C1E. If the "C1E promotion" bit is cleared, then both C1
+ * and C1E requests end up with C1, so there is effectively no C1E.
+ *
+ * By default we enable C1E and disable C1 by marking it with
+ * 'CPUIDLE_FLAG_DISABLED'.
+ */
+static struct cpuidle_state __read_mostly adl_cstates[] = {
+	{
+		.name = "C1",
+		.flags = MWAIT2flg(0x00) | CPUIDLE_FLAG_DISABLED,
+		.exit_latency = 1,
+		.target_residency = 1,
+	},
+	{
+		.name = "C1E",
+		.flags = MWAIT2flg(0x01),
+		.exit_latency = 2,
+		.target_residency = 4,
+	},
+	{
+		.name = "C6",
+		.flags = MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED,
+		.exit_latency = 220,
+		.target_residency = 600,
+	},
+	{
+		.name = "C8",
+		.flags = MWAIT2flg(0x40) | CPUIDLE_FLAG_TLB_FLUSHED,
+		.exit_latency = 280,
+		.target_residency = 800,
+	},
+	{
+		.name = "C10",
+		.flags = MWAIT2flg(0x60) | CPUIDLE_FLAG_TLB_FLUSHED,
+		.exit_latency = 680,
+		.target_residency = 2000,
+	},
+	{}
+};
+
+static struct cpuidle_state __read_mostly adl_l_cstates[] = {
+	{
+		.name = "C1",
+		.flags = MWAIT2flg(0x00) | CPUIDLE_FLAG_DISABLED,
+		.exit_latency = 1,
+		.target_residency = 1,
+	},
+	{
+		.name = "C1E",
+		.flags = MWAIT2flg(0x01),
+		.exit_latency = 2,
+		.target_residency = 4,
+	},
+	{
+		.name = "C6",
+		.flags = MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED,
+		.exit_latency = 170,
+		.target_residency = 500,
+	},
+	{
+		.name = "C8",
+		.flags = MWAIT2flg(0x40) | CPUIDLE_FLAG_TLB_FLUSHED,
+		.exit_latency = 200,
+		.target_residency = 600,
+	},
+	{
+		.name = "C10",
+		.flags = MWAIT2flg(0x60) | CPUIDLE_FLAG_TLB_FLUSHED,
+		.exit_latency = 230,
+		.target_residency = 700,
+	},
+	{}
+};
+
 /*
  * On Sapphire Rapids Xeon C1 has to be disabled if C1E is enabled, and vice
  * versa. On SPR C1E is enabled only if "C1E promotion" bit is set in
@@ -1032,6 +1110,14 @@ static const struct idle_cpu idle_cpu_icx = {
 	.c1e_promotion = C1E_PROMOTION_DISABLE,
 };
 
+static struct idle_cpu __read_mostly idle_cpu_adl = {
+	.state_table = adl_cstates,
+};
+
+static struct idle_cpu __read_mostly idle_cpu_adl_l = {
+	.state_table = adl_l_cstates,
+};
+
 static struct idle_cpu __read_mostly idle_cpu_spr = {
 	.state_table = spr_cstates,
 	.c1e_promotion = C1E_PROMOTION_DISABLE,
@@ -1099,6 +1185,8 @@ static const struct x86_cpu_id intel_idle_ids[] __initconstrel = {
 	ICPU(SKYLAKE_X,			skx),
 	ICPU(ICELAKE_X,			icx),
 	ICPU(ICELAKE_D,			icx),
+	ICPU(ALDERLAKE,			adl),
+	ICPU(ALDERLAKE_L,		adl_l),
 	ICPU(SAPPHIRERAPIDS_X,		spr),
 	ICPU(XEON_PHI_KNL,		knl),
 	ICPU(XEON_PHI_KNM,		knl),
@@ -1268,6 +1356,30 @@ static void __init skx_idle_state_table_update(void)
 	}
 }
 
+/*
+ * adl_idle_state_table_update - Adjust AlderLake idle states table.
+ */
+static void __init adl_idle_state_table_update(void)
+{
+	/* Check if user prefers C1 over C1E. */
+	if ((preferred_states_mask & BIT(1, U)) &&
+	    !(preferred_states_mask & BIT(2, U))) {
+		adl_cstates[0].flags &= ~CPUIDLE_FLAG_DISABLED;
+		adl_cstates[1].flags |= CPUIDLE_FLAG_DISABLED;
+		adl_l_cstates[0].flags &= ~CPUIDLE_FLAG_DISABLED;
+		adl_l_cstates[1].flags |= CPUIDLE_FLAG_DISABLED;
+
+		/* Disable C1E by clearing the "C1E promotion" bit. */
+		idle_cpu_adl.c1e_promotion = C1E_PROMOTION_DISABLE;
+		idle_cpu_adl_l.c1e_promotion = C1E_PROMOTION_DISABLE;
+		return;
+	}
+
+	/* Make sure C1E is enabled by default */
+	idle_cpu_adl.c1e_promotion = C1E_PROMOTION_ENABLE;
+	idle_cpu_adl_l.c1e_promotion = C1E_PROMOTION_ENABLE;
+}
+
 /*
  * spr_idle_state_table_update - Adjust Sapphire Rapids idle states table.
  */
@@ -1324,6 +1436,10 @@ static void __init mwait_idle_state_table_update(void)
 	case INTEL_FAM6_SAPPHIRERAPIDS_X:
 		spr_idle_state_table_update();
 		break;
+	case INTEL_FAM6_ALDERLAKE:
+	case INTEL_FAM6_ALDERLAKE_L:
+		adl_idle_state_table_update();
+		break;
 	}
 }
 
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:36:45 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 16:36:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427866.677377 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv0n-0001hT-VH; Fri, 21 Oct 2022 16:36:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427866.677377; Fri, 21 Oct 2022 16:36:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv0n-0001hM-SD; Fri, 21 Oct 2022 16:36:45 +0000
Received: by outflank-mailman (input) for mailman id 427866;
 Fri, 21 Oct 2022 16:36:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv0m-0001gw-N9
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:36:44 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv0m-00078r-MM
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:36:44 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv0m-0004nV-Li
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:36:44 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=vshHbC2QOOK076/K2K5Wc98aJt+Lf5MTVYA+n3U8nOI=; b=s2YtmLU9C0efpfrK3ZUcvpdnd9
	VDJspG+ktTaYP1YNb4eB7L754smu2IPvR+gB7VqzTSjQHmFtiKSB/g6fKJZUulZoSTdoKzqtfbFKj
	u1gswpzvYWU5NNCn0k66t44zNFoAWXDO+i1bIbR+8dGNnoXopobzmxX/MODjBtLf5+0c=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] x86/mwait-idle: disable IBRS during long idle
Message-Id: <E1olv0m-0004nV-Li@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:36:44 +0000

commit 08acdf9a26153130d7fa47925ceb53c39fcb87da
Author:     Peter Zijlstra <peterz@infradead.org>
AuthorDate: Thu Oct 13 17:55:22 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Thu Oct 13 17:55:22 2022 +0200

    x86/mwait-idle: disable IBRS during long idle
    
    Having IBRS enabled while the SMT sibling is idle unnecessarily slows
    down the running sibling. OTOH, disabling IBRS around idle takes two
    MSR writes, which will increase the idle latency.
    
    Therefore, only disable IBRS around deeper idle states. Shallow idle
    states are bounded by the tick in duration, since NOHZ is not allowed
    for them by virtue of their short target residency.
    
    Only do this for mwait-driven idle, since that keeps interrupts disabled
    across idle, which makes disabling IBRS vs IRQ-entry a non-issue.
    
    Note: C6 is a random threshold, most importantly C1 probably shouldn't
    disable IBRS, benchmarking needed.
    
    Suggested-by: Tim Chen <tim.c.chen@linux.intel.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Signed-off-by: Borislav Petkov <bp@suse.de>
    Reviewed-by: Josh Poimboeuf <jpoimboe@kernel.org>
    Origin: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git bf5835bcdb96
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/arch/x86/cpu/mwait-idle.c | 32 ++++++++++++++++++++++++--------
 xen/include/xen/cpuidle.h     |  3 ++-
 2 files changed, 26 insertions(+), 9 deletions(-)

diff --git a/xen/arch/x86/cpu/mwait-idle.c b/xen/arch/x86/cpu/mwait-idle.c
index 86c47a04c7..f5c83121a8 100644
--- a/xen/arch/x86/cpu/mwait-idle.c
+++ b/xen/arch/x86/cpu/mwait-idle.c
@@ -140,6 +140,12 @@ static const struct cpuidle_state {
  */
 #define CPUIDLE_FLAG_TLB_FLUSHED	0x10000
 
+/*
+ * Disable IBRS across idle (when KERNEL_IBRS), is exclusive vs IRQ_ENABLE
+ * above.
+ */
+#define CPUIDLE_FLAG_IBRS		0x20000
+
 /*
  * MWAIT takes an 8-bit "hint" in EAX "suggesting"
  * the C-state (top nibble) and sub-state (bottom nibble)
@@ -530,31 +536,31 @@ static struct cpuidle_state __read_mostly skl_cstates[] = {
 	},
 	{
 		.name = "C6",
-		.flags = MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED,
+		.flags = MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED | CPUIDLE_FLAG_IBRS,
 		.exit_latency = 85,
 		.target_residency = 200,
 	},
 	{
 		.name = "C7s",
-		.flags = MWAIT2flg(0x33) | CPUIDLE_FLAG_TLB_FLUSHED,
+		.flags = MWAIT2flg(0x33) | CPUIDLE_FLAG_TLB_FLUSHED | CPUIDLE_FLAG_IBRS,
 		.exit_latency = 124,
 		.target_residency = 800,
 	},
 	{
 		.name = "C8",
-		.flags = MWAIT2flg(0x40) | CPUIDLE_FLAG_TLB_FLUSHED,
+		.flags = MWAIT2flg(0x40) | CPUIDLE_FLAG_TLB_FLUSHED | CPUIDLE_FLAG_IBRS,
 		.exit_latency = 200,
 		.target_residency = 800,
 	},
 	{
 		.name = "C9",
-		.flags = MWAIT2flg(0x50) | CPUIDLE_FLAG_TLB_FLUSHED,
+		.flags = MWAIT2flg(0x50) | CPUIDLE_FLAG_TLB_FLUSHED | CPUIDLE_FLAG_IBRS,
 		.exit_latency = 480,
 		.target_residency = 5000,
 	},
 	{
 		.name = "C10",
-		.flags = MWAIT2flg(0x60) | CPUIDLE_FLAG_TLB_FLUSHED,
+		.flags = MWAIT2flg(0x60) | CPUIDLE_FLAG_TLB_FLUSHED | CPUIDLE_FLAG_IBRS,
 		.exit_latency = 890,
 		.target_residency = 5000,
 	},
@@ -576,7 +582,7 @@ static struct cpuidle_state __read_mostly skx_cstates[] = {
 	},
 	{
 		.name = "C6",
-		.flags = MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED,
+		.flags = MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED | CPUIDLE_FLAG_IBRS,
 		.exit_latency = 133,
 		.target_residency = 600,
 	},
@@ -906,6 +912,7 @@ static const struct cpuidle_state snr_cstates[] = {
 static void cf_check mwait_idle(void)
 {
 	unsigned int cpu = smp_processor_id();
+	struct cpu_info *info = get_cpu_info();
 	struct acpi_processor_power *power = processor_powers[cpu];
 	struct acpi_processor_cx *cx = NULL;
 	unsigned int next_state;
@@ -932,8 +939,6 @@ static void cf_check mwait_idle(void)
 			pm_idle_save();
 		else
 		{
-			struct cpu_info *info = get_cpu_info();
-
 			spec_ctrl_enter_idle(info);
 			safe_halt();
 			spec_ctrl_exit_idle(info);
@@ -960,6 +965,11 @@ static void cf_check mwait_idle(void)
 	if ((cx->type >= 3) && errata_c6_workaround())
 		cx = power->safe_state;
 
+	if (cx->ibrs_disable) {
+		ASSERT(!cx->irq_enable_early);
+		spec_ctrl_enter_idle(info);
+	}
+
 #if 0 /* XXX Can we/do we need to do something similar on Xen? */
 	/*
 	 * leave_mm() to avoid costly and often unnecessary wakeups
@@ -991,6 +1001,10 @@ static void cf_check mwait_idle(void)
 
 	/* Now back in C0. */
 	update_idle_stats(power, cx, before, after);
+
+	if (cx->ibrs_disable)
+		spec_ctrl_exit_idle(info);
+
 	local_irq_enable();
 
 	TRACE_6D(TRC_PM_IDLE_EXIT, cx->type, after,
@@ -1603,6 +1617,8 @@ static int cf_check mwait_idle_cpu_init(
 		    /* cstate_restore_tsc() needs to be a no-op */
 		    boot_cpu_has(X86_FEATURE_NONSTOP_TSC))
 			cx->irq_enable_early = true;
+		if (cpuidle_state_table[cstate].flags & CPUIDLE_FLAG_IBRS)
+			cx->ibrs_disable = true;
 
 		dev->count++;
 	}
diff --git a/xen/include/xen/cpuidle.h b/xen/include/xen/cpuidle.h
index bd24a31e12..521a8deb04 100644
--- a/xen/include/xen/cpuidle.h
+++ b/xen/include/xen/cpuidle.h
@@ -42,7 +42,8 @@ struct acpi_processor_cx
     u8 idx;
     u8 type;         /* ACPI_STATE_Cn */
     u8 entry_method; /* ACPI_CSTATE_EM_xxx */
-    bool irq_enable_early;
+    bool irq_enable_early:1;
+    bool ibrs_disable:1;
     u32 address;
     u32 latency;
     u32 target_residency;
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:36:56 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 16:36:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427867.677382 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv0y-0001kO-0e; Fri, 21 Oct 2022 16:36:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427867.677382; Fri, 21 Oct 2022 16:36:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv0x-0001kG-U1; Fri, 21 Oct 2022 16:36:55 +0000
Received: by outflank-mailman (input) for mailman id 427867;
 Fri, 21 Oct 2022 16:36:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv0w-0001k6-Q1
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:36:54 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv0w-00079K-PF
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:36:54 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv0w-0004oH-OU
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:36:54 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=Kp1bZUA93lVChTr9w0A5GVxGiqhJDB6oLpeynrM7KTY=; b=w1gk7ZLb6GVkPyhDlAAdr3soLR
	GyywlXnFHuVI06Y9d1Mup25BAwRGJgRRMW3iKvmobZ+IFKnDr6gMejBAv7kfFCpStvH1E/BcoWrFe
	WCJHXs6+AOpND5a82tx4rW6ScNUC2KbRbkeO6A4r2Os+S/IP9CBVCHzBH2gNshFJtTjU=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] x86/mwait-idle: make SPR C1 and C1E be independent
Message-Id: <E1olv0w-0004oH-OU@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:36:54 +0000

commit 171d4d24f829075cac83b6fafe7a4ed7c93935a6
Author:     Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
AuthorDate: Thu Oct 13 17:56:13 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Thu Oct 13 17:56:13 2022 +0200

    x86/mwait-idle: make SPR C1 and C1E be independent
    
    This patch partially reverts the changes made by the following commit:
    
    da0e58c038e6 intel_idle: add 'preferred_cstates' module argument
    
    As that commit describes, on early Sapphire Rapids Xeon platforms the C1 and
    C1E states were mutually exclusive, so that users could only have either C1 and
    C6, or C1E and C6.
    
    However, Intel firmware engineers managed to remove this limitation and make
    C1 and C1E completely independent, just like on previous Xeon platforms.
    
    Therefore, this patch:
     * Removes commentary describing the old, and now non-existing SPR C1E
       limitation.
     * Marks SPR C1E as available by default.
     * Removes the 'preferred_cstates' parameter handling for SPR. Both C1 and
       C1E will be available regardless of 'preferred_cstates' value.
    
    We expect that all SPR systems are shipping with new firmware, which includes
    the C1/C1E improvement.
    
    Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    Origin: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 1548fac47a11
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/arch/x86/cpu/mwait-idle.c | 23 +----------------------
 1 file changed, 1 insertion(+), 22 deletions(-)

diff --git a/xen/arch/x86/cpu/mwait-idle.c b/xen/arch/x86/cpu/mwait-idle.c
index f5c83121a8..ffdc6fb2fc 100644
--- a/xen/arch/x86/cpu/mwait-idle.c
+++ b/xen/arch/x86/cpu/mwait-idle.c
@@ -689,16 +689,6 @@ static struct cpuidle_state __read_mostly adl_l_cstates[] = {
 	{}
 };
 
-/*
- * On Sapphire Rapids Xeon C1 has to be disabled if C1E is enabled, and vice
- * versa. On SPR C1E is enabled only if "C1E promotion" bit is set in
- * MSR_IA32_POWER_CTL. But in this case there effectively no C1, because C1
- * requests are promoted to C1E. If the "C1E promotion" bit is cleared, then
- * both C1 and C1E requests end up with C1, so there is effectively no C1E.
- *
- * By default we enable C1 and disable C1E by marking it with
- * 'CPUIDLE_FLAG_DISABLED'.
- */
 static struct cpuidle_state __read_mostly spr_cstates[] = {
 	{
 		.name = "C1",
@@ -708,7 +698,7 @@ static struct cpuidle_state __read_mostly spr_cstates[] = {
 	},
 	{
 		.name = "C1E",
-		.flags = MWAIT2flg(0x01) | CPUIDLE_FLAG_DISABLED,
+		.flags = MWAIT2flg(0x01),
 		.exit_latency = 2,
 		.target_residency = 4,
 	},
@@ -1401,17 +1391,6 @@ static void __init spr_idle_state_table_update(void)
 {
 	uint64_t msr;
 
-	/* Check if user prefers C1E over C1. */
-	if ((preferred_states_mask & BIT(2, U)) &&
-	    !(preferred_states_mask & BIT(1, U))) {
-		/* Disable C1 and enable C1E. */
-		spr_cstates[0].flags |= CPUIDLE_FLAG_DISABLED;
-		spr_cstates[1].flags &= ~CPUIDLE_FLAG_DISABLED;
-
-		/* Request enabling C1E using the "C1E promotion" bit. */
-		idle_cpu_spr.c1e_promotion = C1E_PROMOTION_ENABLE;
-	}
-
 	/*
 	 * By default, the C6 state assumes the worst-case scenario of package
 	 * C6. However, if PC6 is disabled, we update the numbers to match
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:37:06 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 16:37:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427868.677385 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv18-0001oi-27; Fri, 21 Oct 2022 16:37:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427868.677385; Fri, 21 Oct 2022 16:37:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv17-0001oa-VX; Fri, 21 Oct 2022 16:37:05 +0000
Received: by outflank-mailman (input) for mailman id 427868;
 Fri, 21 Oct 2022 16:37:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv16-0001oQ-TT
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:37:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv16-00079b-Sh
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:37:04 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv16-0004pO-Ro
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:37:04 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=NGogFJYmjfwN9XUIDk2KsbjRLv23vQUjB2MtOtqViIU=; b=pAjn3yspT8eVDHdYCQ9NuB/lBR
	HZYijDuUq6BWH44P0AZYI5GF5w4qM2uLDH0uB9jZkb5dIDu9iYFInuIQVk+kAiumx2e5aEqfTfHTl
	lxx3Fxn3xQ/rvyIXPYRvTVCk1bvuAJHj4rXuet4850bsvT4GNf9rMfkW6zwbaG27gIfk=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] argo: Remove reachable ASSERT_UNREACHABLE
Message-Id: <E1olv16-0004pO-Ro@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:37:04 +0000

commit 197f612b77c5afe04e60df2100a855370d720ad7
Author:     Jason Andryuk <jandryuk@gmail.com>
AuthorDate: Fri Oct 7 15:31:24 2022 -0400
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Fri Oct 14 14:45:41 2022 +0100

    argo: Remove reachable ASSERT_UNREACHABLE
    
    I observed this ASSERT_UNREACHABLE in partner_rings_remove consistently
    trip.  It was in OpenXT with the viptables patch applied.
    
    dom10 shuts down.
    dom7 is REJECTED sending to dom10.
    dom7 shuts down and this ASSERT trips for dom10.
    
    The argo_send_info has a domid, but there is no refcount taken on
    the domain.  Therefore it's not appropriate to ASSERT that the domain
    can be looked up via domid.  Replace with a debug message.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Reviewed-by: Christopher Clark <christopher.w.clark@gmail.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/common/argo.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/common/argo.c b/xen/common/argo.c
index 748b8714d6..9ad2ecaa1e 100644
--- a/xen/common/argo.c
+++ b/xen/common/argo.c
@@ -1298,7 +1298,8 @@ partner_rings_remove(struct domain *src_d)
                     ASSERT_UNREACHABLE();
             }
             else
-                ASSERT_UNREACHABLE();
+                argo_dprintk("%pd has entry for stale partner d%u\n",
+                             src_d, send_info->id.domain_id);
 
             if ( dst_d )
                 rcu_unlock_domain(dst_d);
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:37:16 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 16:37:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427869.677389 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv1I-0001rU-3U; Fri, 21 Oct 2022 16:37:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427869.677389; Fri, 21 Oct 2022 16:37:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv1I-0001rM-0l; Fri, 21 Oct 2022 16:37:16 +0000
Received: by outflank-mailman (input) for mailman id 427869;
 Fri, 21 Oct 2022 16:37:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv1H-0001rA-02
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:37:15 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv1G-00079l-Vd
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:37:14 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv1G-0004pr-Up
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:37:14 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=UU0GsHnoD/0MksqDAlgyFHIeMiIDPmjbKprcDxPNDR4=; b=MrIUwv2xOe0VFNGIZnU7O75TzR
	lBEH/NJDnJpF8XjxsAO8tCaxKal6NxBUr3bQ7K+e9lb3mnvpi0Ra52DY2iSkDtaO+DcLtavbtdja4
	V+3JkaOQwFyRerBhrwWYzQj7vjOnVG+jjv1dadlde71Bs8sM6xkqckYdPGs6TDq8xPDw=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] tools/debugger/gdbsx: Fix and cleanup makefiles
Message-Id: <E1olv1G-0004pr-Up@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:37:14 +0000

commit 3a206abcd7f77bbbf0da24547e1d889c4d2789c7
Author:     Anthony PERARD <anthony.perard@citrix.com>
AuthorDate: Thu Oct 13 14:04:57 2022 +0100
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Fri Oct 14 16:16:54 2022 +0100

    tools/debugger/gdbsx: Fix and cleanup makefiles
    
    gdbsx/:
      - Make use of subdir facility for the "clean" target.
      - No need to remove the *.a, they aren't in this dir.
      - Avoid calling "distclean" in subdirs, as "distclean" targets only
        call "clean", and "clean" already runs "clean" in subdirs.
      - Avoid the need to make "gx_all.a" and "xg_all.a" in the "all"
        recipe by forcing make to check for update of "xg/xg_all.a" and
        "gx/gx_all.a" by having "FORCE" as prerequisite. Now, when making
        "gdbsx", make will recurse even when both *.a already exist.
      - List target in $(TARGETS).
    
    gdbsx/*/:
      - Fix dependency on *.h.
      - Remove some dead code.
      - List targets in $(TARGETS).
      - Remove "build" target.
      - Cleanup "clean" targets.
      - remove comments about the choice of "ar" instead of "ld"
      - Use "$(AR)" instead of plain "ar".
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/debugger/gdbsx/Makefile    | 20 ++++++++++----------
 tools/debugger/gdbsx/gx/Makefile | 15 +++++++--------
 tools/debugger/gdbsx/xg/Makefile | 25 +++++++------------------
 3 files changed, 24 insertions(+), 36 deletions(-)

diff --git a/tools/debugger/gdbsx/Makefile b/tools/debugger/gdbsx/Makefile
index 5571450a89..4aaf427c45 100644
--- a/tools/debugger/gdbsx/Makefile
+++ b/tools/debugger/gdbsx/Makefile
@@ -1,20 +1,20 @@
 XEN_ROOT = $(CURDIR)/../../..
 include ./Rules.mk
 
+SUBDIRS-y += gx
+SUBDIRS-y += xg
+
+TARGETS := gdbsx
+
 .PHONY: all
-all:
-	$(MAKE) -C gx
-	$(MAKE) -C xg
-	$(MAKE) gdbsx
+all: $(TARGETS)
 
 .PHONY: clean
-clean:
-	rm -f xg_all.a gx_all.a gdbsx
-	set -e; for d in xg gx; do $(MAKE) -C $$d clean; done
+clean: subdirs-clean
+	rm -f $(TARGETS)
 
 .PHONY: distclean
 distclean: clean
-	set -e; for d in xg gx; do $(MAKE) -C $$d distclean; done
 
 .PHONY: install
 install: all
@@ -28,7 +28,7 @@ uninstall:
 gdbsx: gx/gx_all.a xg/xg_all.a 
 	$(CC) $(LDFLAGS) -o $@ $^
 
-xg/xg_all.a:
+xg/xg_all.a: FORCE
 	$(MAKE) -C xg
-gx/gx_all.a:
+gx/gx_all.a: FORCE
 	$(MAKE) -C gx
diff --git a/tools/debugger/gdbsx/gx/Makefile b/tools/debugger/gdbsx/gx/Makefile
index 3b8467f799..e9859aea9c 100644
--- a/tools/debugger/gdbsx/gx/Makefile
+++ b/tools/debugger/gdbsx/gx/Makefile
@@ -2,21 +2,20 @@ XEN_ROOT = $(CURDIR)/../../../..
 include ../Rules.mk
 
 GX_OBJS := gx_comm.o gx_main.o gx_utils.o gx_local.o
-GX_HDRS := $(wildcard *.h)
+
+TARGETS := gx_all.a
 
 .PHONY: all
-all: gx_all.a
+all: $(TARGETS)
 
 .PHONY: clean
 clean:
-	rm -rf gx_all.a *.o .*.d
+	rm -f *.o $(TARGETS) $(DEPS_RM)
 
 .PHONY: distclean
 distclean: clean
 
-#%.o: %.c $(GX_HDRS) Makefile
-#	$(CC) -c $(CFLAGS) -o $@ $<
-
-gx_all.a: $(GX_OBJS) Makefile $(GX_HDRS)
-	ar cr $@ $(GX_OBJS)        # problem with ld using -m32 
+gx_all.a: $(GX_OBJS) Makefile
+	$(AR) cr $@ $(GX_OBJS)
 
+-include $(DEPS_INCLUDE)
diff --git a/tools/debugger/gdbsx/xg/Makefile b/tools/debugger/gdbsx/xg/Makefile
index acdcddf0d5..05325d6d81 100644
--- a/tools/debugger/gdbsx/xg/Makefile
+++ b/tools/debugger/gdbsx/xg/Makefile
@@ -1,35 +1,24 @@
 XEN_ROOT = $(CURDIR)/../../../..
 include ../Rules.mk
 
-XG_HDRS := xg_public.h 
 XG_OBJS := xg_main.o 
 
 CFLAGS += -D__XEN_TOOLS__
 CFLAGS += $(CFLAGS_xeninclude)
 
+TARGETS := xg_all.a
 
 .PHONY: all
-all: build
+all: $(TARGETS)
 
-.PHONY: build
-build: xg_all.a $(XG_HDRS) $(XG_OBJS) Makefile
-# build: mk-symlinks xg_all.a $(XG_HDRS) $(XG_OBJS) Makefile
-# build: mk-symlinks xg_all.a
-
-xg_all.a: $(XG_OBJS) Makefile $(XG_HDRS)
-	ar cr $@ $(XG_OBJS)    # problems using -m32 in ld 
-#	$(LD) -b elf32-i386 $(LDFLAGS) -r -o $@ $^
-#	$(CC) -m32 -c -o $@ $^
-
-# xg_main.o: xg_main.c Makefile $(XG_HDRS)
-#$(CC) -c $(CFLAGS) -o $@ $<
-
-# %.o: %.c $(XG_HDRS) Makefile  -- doesn't work as it won't overwrite Rules.mk
-#%.o: %.c       -- doesn't recompile when .c changed
+xg_all.a: $(XG_OBJS) Makefile
+	$(AR) cr $@ $(XG_OBJS)
 
 .PHONY: clean
 clean:
-	rm -rf xen xg_all.a $(XG_OBJS)  .*.d
+	rm -f $(TARGETS) $(XG_OBJS) $(DEPS_RM)
 
 .PHONY: distclean
 distclean: clean
+
+-include $(DEPS_INCLUDE)
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:37:26 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 16:37:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427870.677393 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv1S-0001un-6l; Fri, 21 Oct 2022 16:37:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427870.677393; Fri, 21 Oct 2022 16:37:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv1S-0001ud-3x; Fri, 21 Oct 2022 16:37:26 +0000
Received: by outflank-mailman (input) for mailman id 427870;
 Fri, 21 Oct 2022 16:37:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv1R-0001uW-32
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:37:25 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv1R-00079s-2F
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:37:25 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv1R-0004qO-1Y
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:37:25 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=TpewBac4ktleRLR3+gEaMQK66glUTVp2d0mT2ztUvjU=; b=KAihKlJ7MSkUqHCIzDSIr05TlN
	Gly/CnQTegFXELT7B5TPFPwEYiD/Fij8GgMkO8gvoHKvV+CgVpePJGR8sxtP0UC0907dYfsQSjft5
	ltaLTRoMN1nmyhT1gB/TkWr5N1sI/5lxQvoSVRngs+TqA0reAUhkfGmuPMWqSG3C2wiA=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] tools/xentrace: rework Makefile
Message-Id: <E1olv1R-0004qO-1Y@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:37:25 +0000

commit a2e8156ba49d699db3d2e36df21c8f57c832de77
Author:     Anthony PERARD <anthony.perard@citrix.com>
AuthorDate: Thu Oct 13 14:04:58 2022 +0100
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Fri Oct 14 16:16:54 2022 +0100

    tools/xentrace: rework Makefile
    
    Remove "build" targets.
    
    Use "$(TARGETS)" to list the binaries to be built.
    
    Cleanup "clean" rule.
    
    Also drop conditional install of $(BIN) and $(LIBBIN) as those two
    variables are now always populated.
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/xentrace/Makefile | 27 +++++++++++----------------
 1 file changed, 11 insertions(+), 16 deletions(-)

diff --git a/tools/xentrace/Makefile b/tools/xentrace/Makefile
index 9fb7fc96e7..63f2f6532d 100644
--- a/tools/xentrace/Makefile
+++ b/tools/xentrace/Makefile
@@ -9,41 +9,36 @@ LDLIBS += $(LDLIBS_libxenevtchn)
 LDLIBS += $(LDLIBS_libxenctrl)
 LDLIBS += $(ARGP_LDFLAGS)
 
-BIN      = xenalyze
-SBIN     = xentrace xentrace_setsize
-LIBBIN   = xenctx
-SCRIPTS  = xentrace_format
+BIN     := xenalyze
+SBIN    := xentrace xentrace_setsize
+LIBBIN  := xenctx
+SCRIPTS := xentrace_format
 
-.PHONY: all
-all: build
+TARGETS := $(BIN) $(SBIN) $(LIBBIN)
 
-.PHONY: build
-build: $(BIN) $(SBIN) $(LIBBIN)
+.PHONY: all
+all: $(TARGETS)
 
 .PHONY: install
-install: build
+install: all
 	$(INSTALL_DIR) $(DESTDIR)$(bindir)
 	$(INSTALL_DIR) $(DESTDIR)$(sbindir)
-	[ -z "$(LIBBIN)" ] || $(INSTALL_DIR) $(DESTDIR)$(LIBEXEC_BIN)
-ifneq ($(BIN),)
+	$(INSTALL_DIR) $(DESTDIR)$(LIBEXEC_BIN)
 	$(INSTALL_PROG) $(BIN) $(DESTDIR)$(bindir)
-endif
 	$(INSTALL_PROG) $(SBIN) $(DESTDIR)$(sbindir)
 	$(INSTALL_PYTHON_PROG) $(SCRIPTS) $(DESTDIR)$(bindir)
-	[ -z "$(LIBBIN)" ] || $(INSTALL_PROG) $(LIBBIN) $(DESTDIR)$(LIBEXEC_BIN)
+	$(INSTALL_PROG) $(LIBBIN) $(DESTDIR)$(LIBEXEC_BIN)
 
 .PHONY: uninstall
 uninstall:
 	rm -f $(addprefix $(DESTDIR)$(LIBEXEC_BIN)/, $(LIBBIN))
 	rm -f $(addprefix $(DESTDIR)$(bindir)/, $(SCRIPTS))
 	rm -f $(addprefix $(DESTDIR)$(sbindir)/, $(SBIN))
-ifneq ($(BIN),)
 	rm -f $(addprefix $(DESTDIR)$(bindir)/, $(BIN))
-endif
 
 .PHONY: clean
 clean:
-	$(RM) *.a *.so *.o *.rpm $(BIN) $(SBIN) $(LIBBIN) $(DEPS_RM)
+	$(RM) *.o $(TARGETS) $(DEPS_RM)
 
 .PHONY: distclean
 distclean: clean
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:37:36 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 16:37:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427871.677397 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv1c-0001xi-87; Fri, 21 Oct 2022 16:37:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427871.677397; Fri, 21 Oct 2022 16:37:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv1c-0001xa-5S; Fri, 21 Oct 2022 16:37:36 +0000
Received: by outflank-mailman (input) for mailman id 427871;
 Fri, 21 Oct 2022 16:37:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv1b-0001xR-67
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:37:35 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv1b-0007A1-5N
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:37:35 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv1b-0004qr-4d
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:37:35 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=W1UGvGUyu/p51C5nxFAW3BD08yNqxNC9/QZeUMkhvno=; b=Q4S4CmWda9+MpKeuEXFQbsweB5
	mg2E/ARXLRpEYDQyevSO06fPHSGafFPE/XmIFPdFLtaEP03DN3XxE6ZUKNWIHwkczw2NKyhNnOzcV
	guLdtcBmnNPNU0PQCdwE4XXVktV7v2fJUPkKRUo3ObRGN20VxOzoraZFBVpCkrTJ9igw=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] tools: Introduce $(xenlibs-ldflags, ) macro
Message-Id: <E1olv1b-0004qr-4d@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:37:35 +0000

commit fcdb9cdb953d6c1f893286c3619e74f72e1327fc
Author:     Anthony PERARD <anthony.perard@citrix.com>
AuthorDate: Thu Oct 13 14:04:59 2022 +0100
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Fri Oct 14 16:16:54 2022 +0100

    tools: Introduce $(xenlibs-ldflags, ) macro
    
    This avoids the need to open-code the list of flags needed to link
    with an in-tree Xen library when using -lxen*.
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Reviewed-by: Henry Wang <Henry.Wang@arm.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/Rules.mk                 | 8 ++++++++
 tools/golang/xenlight/Makefile | 2 +-
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/tools/Rules.mk b/tools/Rules.mk
index ce77dd2eb1..26958b2948 100644
--- a/tools/Rules.mk
+++ b/tools/Rules.mk
@@ -105,6 +105,14 @@ define xenlibs-ldlibs
     $(foreach lib,$(1),$(xenlibs-ldlibs-$(lib)))
 endef
 
+# Provide needed flags for linking an in-tree Xen library by an external
+# project (or when it is necessary to link with "-lxen$(1)" instead of using
+# the full path to the library).
+define xenlibs-ldflags
+    $(call xenlibs-rpath,$(1)) \
+    $(foreach lib,$(1),-L$(XEN_ROOT)/tools/libs/$(lib))
+endef
+
 define LIB_defs
  FILENAME_$(1) ?= xen$(1)
  XEN_libxen$(1) = $$(XEN_ROOT)/tools/libs/$(1)
diff --git a/tools/golang/xenlight/Makefile b/tools/golang/xenlight/Makefile
index 64671f246c..00e6d17f2b 100644
--- a/tools/golang/xenlight/Makefile
+++ b/tools/golang/xenlight/Makefile
@@ -27,7 +27,7 @@ GOXL_GEN_FILES = types.gen.go helpers.gen.go
 # so that it can find the actual library.
 .PHONY: build
 build: xenlight.go $(GOXL_GEN_FILES)
-	CGO_CFLAGS="$(CFLAGS_libxenlight) $(CFLAGS_libxentoollog) $(APPEND_CFLAGS)" CGO_LDFLAGS="$(LDLIBS_libxenlight) $(LDLIBS_libxentoollog) -L$(XEN_libxenlight) -L$(XEN_libxentoollog) $(APPEND_LDFLAGS)" $(GO) build -x
+	CGO_CFLAGS="$(CFLAGS_libxenlight) $(CFLAGS_libxentoollog) $(APPEND_CFLAGS)" CGO_LDFLAGS="$(call xenlibs-ldflags,light toollog) $(APPEND_LDFLAGS)" $(GO) build -x
 
 .PHONY: install
 install: build
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:37:46 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 16:37:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427872.677401 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv1m-000206-9v; Fri, 21 Oct 2022 16:37:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427872.677401; Fri, 21 Oct 2022 16:37:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv1m-0001zw-6x; Fri, 21 Oct 2022 16:37:46 +0000
Received: by outflank-mailman (input) for mailman id 427872;
 Fri, 21 Oct 2022 16:37:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv1l-0001zl-9s
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:37:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv1l-0007AL-94
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:37:45 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv1l-0004rM-8S
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:37:45 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=xAVBbrXWcba8013Z38UIeLnkHK7+haWkWjEAESxQby8=; b=rL3/M9YIIdgcWp+ZedxgIc3tBu
	7bM5vRW1/2HNPqbdoCAoQ4GpLGevbwdx7sYAEXTz2Rio8fPmq0rBPfTzo+gd28Yg/Vzt2WnO6rWBE
	xMIivhd1usYrROyc3KxYCVu6ILP1dRIvAqxr+tgF626LJzxFHXtvKrtJcpnNi5OWKMPk=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] tools: Add -Werror by default to all tools/
Message-Id: <E1olv1l-0004rM-8S@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:37:45 +0000

commit e4f5949c446635a854f06317b81db11cccfdabee
Author:     Anthony PERARD <anthony.perard@citrix.com>
AuthorDate: Thu Oct 13 14:05:00 2022 +0100
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Fri Oct 14 16:16:54 2022 +0100

    tools: Add -Werror by default to all tools/
    
    And provide an option to ./configure to disable it.
    
    A follow-up patch will remove -Werror from every other Makefile in
    tools/.
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 config/Tools.mk.in |  1 +
 tools/Rules.mk     |  4 ++++
 tools/configure    | 26 ++++++++++++++++++++++++++
 tools/configure.ac |  1 +
 4 files changed, 32 insertions(+)

diff --git a/config/Tools.mk.in b/config/Tools.mk.in
index 6c1a0a676f..d0d460f922 100644
--- a/config/Tools.mk.in
+++ b/config/Tools.mk.in
@@ -1,5 +1,6 @@
 -include $(XEN_ROOT)/config/Paths.mk
 
+CONFIG_WERROR       := @werror@
 CONFIG_RUMP         := @CONFIG_RUMP@
 ifeq ($(CONFIG_RUMP),y)
 XEN_OS              := NetBSDRump
diff --git a/tools/Rules.mk b/tools/Rules.mk
index 26958b2948..a165dc4bda 100644
--- a/tools/Rules.mk
+++ b/tools/Rules.mk
@@ -133,6 +133,10 @@ endif
 
 CFLAGS_libxenlight += $(CFLAGS_libxenctrl)
 
+ifeq ($(CONFIG_WERROR),y)
+CFLAGS += -Werror
+endif
+
 ifeq ($(debug),y)
 # Use -Og if available, -O0 otherwise
 dbg_opt_level := $(call cc-option,$(CC),-Og,-O0)
diff --git a/tools/configure b/tools/configure
index 41deb7fb96..acd9a04c3b 100755
--- a/tools/configure
+++ b/tools/configure
@@ -716,6 +716,7 @@ ocamltools
 monitors
 githttp
 rpath
+werror
 DEBUG_DIR
 XEN_DUMP_DIR
 XEN_PAGING_DIR
@@ -805,6 +806,7 @@ with_xen_scriptdir
 with_xen_dumpdir
 with_rundir
 with_debugdir
+enable_werror
 enable_rpath
 enable_githttp
 enable_monitors
@@ -1490,6 +1492,7 @@ Optional Features:
   --disable-FEATURE       do not include FEATURE (same as --enable-FEATURE=no)
   --enable-FEATURE[=ARG]  include FEATURE [ARG=yes]
   --disable-largefile     omit support for large files
+  --disable-werror        Build tools without -Werror (default is ENABLED)
   --enable-rpath          Build tools with -Wl,-rpath,LIBDIR (default is
                           DISABLED)
   --enable-githttp        Download GIT repositories via HTTP (default is
@@ -4111,6 +4114,29 @@ DEBUG_DIR=$debugdir_path
 
 # Enable/disable options
 
+# Check whether --enable-werror was given.
+if test "${enable_werror+set}" = set; then :
+  enableval=$enable_werror;
+fi
+
+
+if test "x$enable_werror" = "xno"; then :
+
+    ax_cv_werror="n"
+
+elif test "x$enable_werror" = "xyes"; then :
+
+    ax_cv_werror="y"
+
+elif test -z $ax_cv_werror; then :
+
+    ax_cv_werror="y"
+
+fi
+werror=$ax_cv_werror
+
+
+
 # Check whether --enable-rpath was given.
 if test "${enable_rpath+set}" = set; then :
   enableval=$enable_rpath;
diff --git a/tools/configure.ac b/tools/configure.ac
index 32cbe6bd3c..09059bc569 100644
--- a/tools/configure.ac
+++ b/tools/configure.ac
@@ -81,6 +81,7 @@ m4_include([../m4/header.m4])
 AX_XEN_EXPAND_CONFIG()
 
 # Enable/disable options
+AX_ARG_DEFAULT_ENABLE([werror], [Build tools without -Werror])
 AX_ARG_DEFAULT_DISABLE([rpath], [Build tools with -Wl,-rpath,LIBDIR])
 AX_ARG_DEFAULT_DISABLE([githttp], [Download GIT repositories via HTTP])
 AX_ARG_DEFAULT_ENABLE([monitors], [Disable xenstat and xentop monitoring tools])
--
generated by git-patchbot for /home/xen/git/xen.git#master
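[Editor's note: a standalone sketch of the enable_werror resolution logic the generated configure fragment above implements, wrapped in a hypothetical function for illustration. Unset and "yes" both resolve to y; only an explicit "no" (i.e. --disable-werror) resolves to n.]

```shell
# Sketch of the default-enable resolution from tools/configure:
# --disable-werror -> n, --enable-werror -> y, no option given -> y.
resolve_werror() {
    enable_werror=$1
    ax_cv_werror=
    if [ "x$enable_werror" = "xno" ]; then
        ax_cv_werror="n"
    elif [ "x$enable_werror" = "xyes" ]; then
        ax_cv_werror="y"
    elif [ -z "$ax_cv_werror" ]; then
        ax_cv_werror="y"
    fi
    echo "$ax_cv_werror"
}

resolve_werror no     # prints n  (--disable-werror)
resolve_werror yes    # prints y  (--enable-werror)
resolve_werror ""     # prints y  (default)
```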


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:37:56 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 16:37:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427873.677405 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv1w-00022i-BW; Fri, 21 Oct 2022 16:37:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427873.677405; Fri, 21 Oct 2022 16:37:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv1w-00022W-8S; Fri, 21 Oct 2022 16:37:56 +0000
Received: by outflank-mailman (input) for mailman id 427873;
 Fri, 21 Oct 2022 16:37:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv1v-00022D-Cw
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:37:55 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv1v-0007AP-CG
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:37:55 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv1v-0004rq-Bb
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:37:55 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=gatgHf03Xl0tki+lVJTskEdRfJU8DftlGoU3WnDVUgU=; b=sGqE/gX8k55n0PjigdS5aB5Yn1
	Kle2y+q3REA8gUSIY2TQY3cN/5EYP5J7+qv3Op1gX52vLHLEYL/mh7eF7f8JbyU/qSbp6PuPmEl7o
	Zgeab3k+p5tJFTnHWcVkzSYH1ljIvyjycC/5YQwp+nYbTKk/Q4xz70NN6PG4yJ9hplYg=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] tools: Remove -Werror everywhere else
Message-Id: <E1olv1v-0004rq-Bb@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:37:55 +0000

commit 40d96f0c7d5399f9b824926279d41ead974fbe39
Author:     Anthony PERARD <anthony.perard@citrix.com>
AuthorDate: Thu Oct 13 14:05:01 2022 +0100
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Fri Oct 14 16:17:41 2022 +0100

    tools: Remove -Werror everywhere else
    
    The previous changeset, e4f5949c4466 ("tools: Add -Werror by default to all
    tools/"), added "-Werror" to CFLAGS in tools/Rules.mk.  Remove it from
    everywhere else, now that it is duplicated.
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Acked-by: Daniel P. Smith <dpsmith@apertussolutions.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/console/client/Makefile   | 1 -
 tools/console/daemon/Makefile   | 1 -
 tools/debugger/gdbsx/Rules.mk   | 2 +-
 tools/debugger/kdd/Makefile     | 1 -
 tools/firmware/Rules.mk         | 2 --
 tools/flask/utils/Makefile      | 1 -
 tools/fuzz/cpu-policy/Makefile  | 2 +-
 tools/libfsimage/common.mk      | 2 +-
 tools/libs/libs.mk              | 2 +-
 tools/misc/Makefile             | 1 -
 tools/ocaml/common.make         | 2 +-
 tools/pygrub/setup.py           | 2 +-
 tools/python/setup.py           | 2 +-
 tools/tests/cpu-policy/Makefile | 2 +-
 tools/tests/depriv/Makefile     | 2 +-
 tools/tests/resource/Makefile   | 1 -
 tools/tests/tsx/Makefile        | 1 -
 tools/tests/xenstore/Makefile   | 1 -
 tools/xcutils/Makefile          | 2 --
 tools/xenmon/Makefile           | 1 -
 tools/xenpaging/Makefile        | 1 -
 tools/xenpmd/Makefile           | 1 -
 tools/xenstore/Makefile.common  | 1 -
 tools/xentop/Makefile           | 2 +-
 tools/xentrace/Makefile         | 2 --
 tools/xl/Makefile               | 2 +-
 26 files changed, 11 insertions(+), 29 deletions(-)

diff --git a/tools/console/client/Makefile b/tools/console/client/Makefile
index e2f2554f92..62d89fdeb9 100644
--- a/tools/console/client/Makefile
+++ b/tools/console/client/Makefile
@@ -1,7 +1,6 @@
 XEN_ROOT=$(CURDIR)/../../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS += -Werror
 CFLAGS += $(CFLAGS_libxenctrl)
 CFLAGS += $(CFLAGS_libxenstore)
 CFLAGS += -include $(XEN_ROOT)/tools/config.h
diff --git a/tools/console/daemon/Makefile b/tools/console/daemon/Makefile
index 99bb33b6a2..9fc3b6711f 100644
--- a/tools/console/daemon/Makefile
+++ b/tools/console/daemon/Makefile
@@ -1,7 +1,6 @@
 XEN_ROOT=$(CURDIR)/../../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS += -Werror
 CFLAGS += $(CFLAGS_libxenctrl)
 CFLAGS += $(CFLAGS_libxenstore)
 CFLAGS += $(CFLAGS_libxenevtchn)
diff --git a/tools/debugger/gdbsx/Rules.mk b/tools/debugger/gdbsx/Rules.mk
index 920f1c87fb..1f631b62da 100644
--- a/tools/debugger/gdbsx/Rules.mk
+++ b/tools/debugger/gdbsx/Rules.mk
@@ -1,6 +1,6 @@
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS   += -Werror -Wmissing-prototypes 
+CFLAGS   += -Wmissing-prototypes
 # (gcc 4.3x and later)   -Wconversion -Wno-sign-conversion
 
 CFLAGS-$(clang) += -Wno-ignored-attributes
diff --git a/tools/debugger/kdd/Makefile b/tools/debugger/kdd/Makefile
index 26116949d4..a72ad3b1e0 100644
--- a/tools/debugger/kdd/Makefile
+++ b/tools/debugger/kdd/Makefile
@@ -1,7 +1,6 @@
 XEN_ROOT = $(CURDIR)/../../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS  += -Werror
 CFLAGS  += $(CFLAGS_libxenctrl)
 CFLAGS  += -DXC_WANT_COMPAT_MAP_FOREIGN_API
 LDLIBS  += $(LDLIBS_libxenctrl)
diff --git a/tools/firmware/Rules.mk b/tools/firmware/Rules.mk
index 278cca01e4..d3482c9ec4 100644
--- a/tools/firmware/Rules.mk
+++ b/tools/firmware/Rules.mk
@@ -11,8 +11,6 @@ ifneq ($(debug),y)
 CFLAGS += -DNDEBUG
 endif
 
-CFLAGS += -Werror
-
 $(call cc-options-add,CFLAGS,CC,$(EMBEDDED_EXTRA_CFLAGS))
 
 $(call cc-option-add,CFLAGS,CC,-fcf-protection=none)
diff --git a/tools/flask/utils/Makefile b/tools/flask/utils/Makefile
index 6be134142a..88d7edb6b1 100644
--- a/tools/flask/utils/Makefile
+++ b/tools/flask/utils/Makefile
@@ -1,7 +1,6 @@
 XEN_ROOT=$(CURDIR)/../../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS += -Werror
 CFLAGS += $(CFLAGS_libxenctrl)
 
 TARGETS := flask-loadpolicy flask-setenforce flask-getenforce flask-label-pci flask-get-bool flask-set-bool
diff --git a/tools/fuzz/cpu-policy/Makefile b/tools/fuzz/cpu-policy/Makefile
index 41a2230408..6e7743e0aa 100644
--- a/tools/fuzz/cpu-policy/Makefile
+++ b/tools/fuzz/cpu-policy/Makefile
@@ -17,7 +17,7 @@ install: all
 
 .PHONY: uninstall
 
-CFLAGS += -Werror $(CFLAGS_xeninclude) -D__XEN_TOOLS__
+CFLAGS += $(CFLAGS_xeninclude) -D__XEN_TOOLS__
 CFLAGS += $(APPEND_CFLAGS) -Og
 
 vpath %.c ../../../xen/lib/x86
diff --git a/tools/libfsimage/common.mk b/tools/libfsimage/common.mk
index 77bc957f27..4fc8c66795 100644
--- a/tools/libfsimage/common.mk
+++ b/tools/libfsimage/common.mk
@@ -2,7 +2,7 @@ include $(XEN_ROOT)/tools/Rules.mk
 
 FSDIR := $(libdir)/xenfsimage
 CFLAGS += -Wno-unknown-pragmas -I$(XEN_ROOT)/tools/libfsimage/common/ -DFSIMAGE_FSDIR=\"$(FSDIR)\"
-CFLAGS += -Werror -D_GNU_SOURCE
+CFLAGS += -D_GNU_SOURCE
 LDFLAGS += -L../common/
 
 PIC_OBJS = $(patsubst %.c,%.opic,$(LIB_SRCS-y))
diff --git a/tools/libs/libs.mk b/tools/libs/libs.mk
index 2b8e7a6128..e47fb30ed4 100644
--- a/tools/libs/libs.mk
+++ b/tools/libs/libs.mk
@@ -14,7 +14,7 @@ MINOR ?= 0
 
 SHLIB_LDFLAGS += -Wl,--version-script=libxen$(LIBNAME).map
 
-CFLAGS   += -Werror -Wmissing-prototypes
+CFLAGS   += -Wmissing-prototypes
 CFLAGS   += $(CFLAGS_xeninclude)
 CFLAGS   += $(foreach lib, $(USELIBS_$(LIBNAME)), $(CFLAGS_libxen$(lib)))
 
diff --git a/tools/misc/Makefile b/tools/misc/Makefile
index 0e02401227..1c6e1d6a04 100644
--- a/tools/misc/Makefile
+++ b/tools/misc/Makefile
@@ -1,7 +1,6 @@
 XEN_ROOT=$(CURDIR)/../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS += -Werror
 # Include configure output (config.h)
 CFLAGS += -include $(XEN_ROOT)/tools/config.h
 CFLAGS += $(CFLAGS_libxenevtchn)
diff --git a/tools/ocaml/common.make b/tools/ocaml/common.make
index d5478f626f..0c8a597d5b 100644
--- a/tools/ocaml/common.make
+++ b/tools/ocaml/common.make
@@ -9,7 +9,7 @@ OCAMLLEX ?= ocamllex
 OCAMLYACC ?= ocamlyacc
 OCAMLFIND ?= ocamlfind
 
-CFLAGS += -fPIC -Werror -I$(shell ocamlc -where)
+CFLAGS += -fPIC -I$(shell ocamlc -where)
 
 OCAMLOPTFLAG_G := $(shell $(OCAMLOPT) -h 2>&1 | sed -n 's/^  *\(-g\) .*/\1/p')
 OCAMLOPTFLAGS = $(OCAMLOPTFLAG_G) -ccopt "$(LDFLAGS)" -dtypes $(OCAMLINCLUDE) -cc $(CC) -w F -warn-error F
diff --git a/tools/pygrub/setup.py b/tools/pygrub/setup.py
index b8f1dc4590..0e4e3d02d3 100644
--- a/tools/pygrub/setup.py
+++ b/tools/pygrub/setup.py
@@ -3,7 +3,7 @@ from distutils.ccompiler import new_compiler
 import os
 import sys
 
-extra_compile_args  = [ "-fno-strict-aliasing", "-Werror" ]
+extra_compile_args  = [ "-fno-strict-aliasing" ]
 
 XEN_ROOT = "../.."
 
diff --git a/tools/python/setup.py b/tools/python/setup.py
index 8c95db7769..721a3141d7 100644
--- a/tools/python/setup.py
+++ b/tools/python/setup.py
@@ -8,7 +8,7 @@ SHLIB_libxenctrl = os.environ['SHLIB_libxenctrl'].split()
 SHLIB_libxenguest = os.environ['SHLIB_libxenguest'].split()
 SHLIB_libxenstore = os.environ['SHLIB_libxenstore'].split()
 
-extra_compile_args  = [ "-fno-strict-aliasing", "-Werror" ]
+extra_compile_args  = [ "-fno-strict-aliasing" ]
 
 PATH_XEN      = XEN_ROOT + "/tools/include"
 PATH_LIBXENTOOLLOG = XEN_ROOT + "/tools/libs/toollog"
diff --git a/tools/tests/cpu-policy/Makefile b/tools/tests/cpu-policy/Makefile
index 93af9d76fa..c5b81afc71 100644
--- a/tools/tests/cpu-policy/Makefile
+++ b/tools/tests/cpu-policy/Makefile
@@ -36,7 +36,7 @@ install: all
 uninstall:
 	$(RM) -- $(addprefix $(DESTDIR)$(LIBEXEC_BIN)/,$(TARGETS))
 
-CFLAGS += -Werror -D__XEN_TOOLS__
+CFLAGS += -D__XEN_TOOLS__
 CFLAGS += $(CFLAGS_xeninclude)
 CFLAGS += $(APPEND_CFLAGS)
 
diff --git a/tools/tests/depriv/Makefile b/tools/tests/depriv/Makefile
index 3cba28da25..7d9e3b01bb 100644
--- a/tools/tests/depriv/Makefile
+++ b/tools/tests/depriv/Makefile
@@ -1,7 +1,7 @@
 XEN_ROOT=$(CURDIR)/../../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS += -Werror -Wno-declaration-after-statement
+CFLAGS += -Wno-declaration-after-statement
 
 CFLAGS += $(CFLAGS_xeninclude)
 CFLAGS += $(CFLAGS_libxenctrl)
diff --git a/tools/tests/resource/Makefile b/tools/tests/resource/Makefile
index b3cd70c06d..a5856bf095 100644
--- a/tools/tests/resource/Makefile
+++ b/tools/tests/resource/Makefile
@@ -27,7 +27,6 @@ install: all
 uninstall:
 	$(RM) -- $(DESTDIR)$(LIBEXEC_BIN)/$(TARGET)
 
-CFLAGS += -Werror
 CFLAGS += $(CFLAGS_xeninclude)
 CFLAGS += $(CFLAGS_libxenctrl)
 CFLAGS += $(CFLAGS_libxenforeginmemory)
diff --git a/tools/tests/tsx/Makefile b/tools/tests/tsx/Makefile
index d7d2a5d95e..a4f516b725 100644
--- a/tools/tests/tsx/Makefile
+++ b/tools/tests/tsx/Makefile
@@ -26,7 +26,6 @@ uninstall:
 .PHONY: uninstall
 uninstall:
 
-CFLAGS += -Werror
 CFLAGS += -I$(XEN_ROOT)/tools/libs/ctrl -I$(XEN_ROOT)/tools/libs/guest
 CFLAGS += $(CFLAGS_xeninclude)
 CFLAGS += $(CFLAGS_libxenctrl)
diff --git a/tools/tests/xenstore/Makefile b/tools/tests/xenstore/Makefile
index 239e1dce47..202dda0d3c 100644
--- a/tools/tests/xenstore/Makefile
+++ b/tools/tests/xenstore/Makefile
@@ -27,7 +27,6 @@ install: all
 uninstall:
 	$(RM) -- $(addprefix $(DESTDIR)$(LIBEXEC_BIN)/,$(TARGETS))
 
-CFLAGS += -Werror
 CFLAGS += $(CFLAGS_libxenstore)
 CFLAGS += $(APPEND_CFLAGS)
 
diff --git a/tools/xcutils/Makefile b/tools/xcutils/Makefile
index e40a2c4bfa..3687f6cd8f 100644
--- a/tools/xcutils/Makefile
+++ b/tools/xcutils/Makefile
@@ -13,8 +13,6 @@ include $(XEN_ROOT)/tools/Rules.mk
 
 TARGETS := readnotes lsevtchn
 
-CFLAGS += -Werror
-
 CFLAGS_readnotes.o  := $(CFLAGS_libxenevtchn) $(CFLAGS_libxenctrl) $(CFLAGS_libxenguest)
 CFLAGS_lsevtchn.o   := $(CFLAGS_libxenevtchn) $(CFLAGS_libxenctrl)
 
diff --git a/tools/xenmon/Makefile b/tools/xenmon/Makefile
index 3e150b0659..679c4b41a3 100644
--- a/tools/xenmon/Makefile
+++ b/tools/xenmon/Makefile
@@ -13,7 +13,6 @@
 XEN_ROOT=$(CURDIR)/../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS  += -Werror
 CFLAGS  += $(CFLAGS_libxenevtchn)
 CFLAGS  += $(CFLAGS_libxenctrl)
 LDLIBS  += $(LDLIBS_libxenctrl)
diff --git a/tools/xenpaging/Makefile b/tools/xenpaging/Makefile
index e2ed9eaa3f..835cf2b965 100644
--- a/tools/xenpaging/Makefile
+++ b/tools/xenpaging/Makefile
@@ -12,7 +12,6 @@ OBJS-y   += xenpaging.o
 OBJS-y   += policy_$(POLICY).o
 OBJS-y   += pagein.o
 
-CFLAGS   += -Werror
 CFLAGS   += -Wno-unused
 
 TARGETS := xenpaging
diff --git a/tools/xenpmd/Makefile b/tools/xenpmd/Makefile
index e0d3f06ab2..8da20510b5 100644
--- a/tools/xenpmd/Makefile
+++ b/tools/xenpmd/Makefile
@@ -1,7 +1,6 @@
 XEN_ROOT=$(CURDIR)/../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS += -Werror
 CFLAGS += $(CFLAGS_libxenstore)
 
 LDLIBS += $(LDLIBS_libxenstore)
diff --git a/tools/xenstore/Makefile.common b/tools/xenstore/Makefile.common
index 21b78b0538..ddbac052ac 100644
--- a/tools/xenstore/Makefile.common
+++ b/tools/xenstore/Makefile.common
@@ -9,7 +9,6 @@ XENSTORED_OBJS-$(CONFIG_NetBSD) += xenstored_posix.o
 XENSTORED_OBJS-$(CONFIG_FreeBSD) += xenstored_posix.o
 XENSTORED_OBJS-$(CONFIG_MiniOS) += xenstored_minios.o
 
-CFLAGS += -Werror
 # Include configure output (config.h)
 CFLAGS += -include $(XEN_ROOT)/tools/config.h
 CFLAGS += -I./include
diff --git a/tools/xentop/Makefile b/tools/xentop/Makefile
index 7bd96f34d5..70cc2211c5 100644
--- a/tools/xentop/Makefile
+++ b/tools/xentop/Makefile
@@ -13,7 +13,7 @@
 XEN_ROOT=$(CURDIR)/../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS += -DGCC_PRINTF -Werror $(CFLAGS_libxenstat)
+CFLAGS += -DGCC_PRINTF $(CFLAGS_libxenstat)
 LDLIBS += $(LDLIBS_libxenstat) $(CURSES_LIBS) $(TINFO_LIBS) $(SOCKET_LIBS) -lm
 CFLAGS += -DHOST_$(XEN_OS)
 
diff --git a/tools/xentrace/Makefile b/tools/xentrace/Makefile
index 63f2f6532d..d50d400472 100644
--- a/tools/xentrace/Makefile
+++ b/tools/xentrace/Makefile
@@ -1,8 +1,6 @@
 XEN_ROOT=$(CURDIR)/../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS += -Werror
-
 CFLAGS += $(CFLAGS_libxenevtchn)
 CFLAGS += $(CFLAGS_libxenctrl)
 LDLIBS += $(LDLIBS_libxenevtchn)
diff --git a/tools/xl/Makefile b/tools/xl/Makefile
index b7f439121a..5f7aa5f46c 100644
--- a/tools/xl/Makefile
+++ b/tools/xl/Makefile
@@ -5,7 +5,7 @@
 XEN_ROOT = $(CURDIR)/../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS += -Werror -Wno-format-zero-length -Wmissing-declarations \
+CFLAGS += -Wno-format-zero-length -Wmissing-declarations \
 	-Wno-declaration-after-statement -Wformat-nonliteral
 CFLAGS += -fPIC
 
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:38:07 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 16:38:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427874.677409 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv26-00025q-DN; Fri, 21 Oct 2022 16:38:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427874.677409; Fri, 21 Oct 2022 16:38:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv26-00025i-A2; Fri, 21 Oct 2022 16:38:06 +0000
Received: by outflank-mailman (input) for mailman id 427874;
 Fri, 21 Oct 2022 16:38:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv25-00025X-H3
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:38:05 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv25-0007Ag-GP
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:38:05 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv25-0004sb-FX
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:38:05 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=NJPsDFmAuAuX00+YeD/6hXGvDLyiBmaxRvoBnW5AiZ4=; b=5bVfGhqivpk/EDcXSGeH1/RVrR
	McFYY2myj/BWgroEQLsDWh6SV6k6nKEKXrpLcq/petTj3H3udSSxjXT+Kwey3KWtNJiFn7y9uvtsS
	zg7sytHjEUDBWFYaLOUBqXw1CQT2019AuF4ztIpuKhh34wA+pDZPGztxdaqgr/TSmGCQ=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] tools/hotplug: Generate "hotplugpath.sh" with configure
Message-Id: <E1olv25-0004sb-FX@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:38:05 +0000

commit f3fae4184fb2e90b715f7361f7bd4f37f400587f
Author:     Anthony PERARD <anthony.perard@citrix.com>
AuthorDate: Thu Oct 13 14:05:02 2022 +0100
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Fri Oct 14 20:56:57 2022 +0100

    tools/hotplug: Generate "hotplugpath.sh" with configure
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/configure                        |  3 ++-
 tools/configure.ac                     |  1 +
 tools/hotplug/common/Makefile          | 10 ++--------
 tools/hotplug/common/hotplugpath.sh.in | 16 ++++++++++++++++
 4 files changed, 21 insertions(+), 9 deletions(-)

diff --git a/tools/configure b/tools/configure
index acd9a04c3b..6199823f5a 100755
--- a/tools/configure
+++ b/tools/configure
@@ -2456,7 +2456,7 @@ ac_compiler_gnu=$ac_cv_c_compiler_gnu
 
 
 
-ac_config_files="$ac_config_files ../config/Tools.mk hotplug/FreeBSD/rc.d/xencommons hotplug/FreeBSD/rc.d/xendriverdomain hotplug/Linux/init.d/sysconfig.xencommons hotplug/Linux/init.d/sysconfig.xendomains hotplug/Linux/init.d/xen-watchdog hotplug/Linux/init.d/xencommons hotplug/Linux/init.d/xendomains hotplug/Linux/init.d/xendriverdomain hotplug/Linux/launch-xenstore hotplug/Linux/vif-setup hotplug/Linux/xen-hotplug-common.sh hotplug/Linux/xendomains hotplug/NetBSD/rc.d/xencommons hotplug/NetBSD/rc.d/xendriverdomain ocaml/libs/xs/paths.ml ocaml/xenstored/paths.ml ocaml/xenstored/oxenstored.conf"
+ac_config_files="$ac_config_files ../config/Tools.mk hotplug/common/hotplugpath.sh hotplug/FreeBSD/rc.d/xencommons hotplug/FreeBSD/rc.d/xendriverdomain hotplug/Linux/init.d/sysconfig.xencommons hotplug/Linux/init.d/sysconfig.xendomains hotplug/Linux/init.d/xen-watchdog hotplug/Linux/init.d/xencommons hotplug/Linux/init.d/xendomains hotplug/Linux/init.d/xendriverdomain hotplug/Linux/launch-xenstore hotplug/Linux/vif-setup hotplug/Linux/xen-hotplug-common.sh hotplug/Linux/xendomains hotplug/NetBSD/rc.d/xencommons hotplug/NetBSD/rc.d/xendriverdomain ocaml/libs/xs/paths.ml ocaml/xenstored/paths.ml ocaml/xenstored/oxenstored.conf"
 
 ac_config_headers="$ac_config_headers config.h"
 
@@ -10947,6 +10947,7 @@ for ac_config_target in $ac_config_targets
 do
   case $ac_config_target in
     "../config/Tools.mk") CONFIG_FILES="$CONFIG_FILES ../config/Tools.mk" ;;
+    "hotplug/common/hotplugpath.sh") CONFIG_FILES="$CONFIG_FILES hotplug/common/hotplugpath.sh" ;;
     "hotplug/FreeBSD/rc.d/xencommons") CONFIG_FILES="$CONFIG_FILES hotplug/FreeBSD/rc.d/xencommons" ;;
     "hotplug/FreeBSD/rc.d/xendriverdomain") CONFIG_FILES="$CONFIG_FILES hotplug/FreeBSD/rc.d/xendriverdomain" ;;
     "hotplug/Linux/init.d/sysconfig.xencommons") CONFIG_FILES="$CONFIG_FILES hotplug/Linux/init.d/sysconfig.xencommons" ;;
diff --git a/tools/configure.ac b/tools/configure.ac
index 09059bc569..18e481d77e 100644
--- a/tools/configure.ac
+++ b/tools/configure.ac
@@ -7,6 +7,7 @@ AC_INIT([Xen Hypervisor Tools], m4_esyscmd([../version.sh ../xen/Makefile]),
 AC_CONFIG_SRCDIR([libs/light/libxl.c])
 AC_CONFIG_FILES([
 ../config/Tools.mk
+hotplug/common/hotplugpath.sh
 hotplug/FreeBSD/rc.d/xencommons
 hotplug/FreeBSD/rc.d/xendriverdomain
 hotplug/Linux/init.d/sysconfig.xencommons
diff --git a/tools/hotplug/common/Makefile b/tools/hotplug/common/Makefile
index e8a8dbea6c..62afe1019e 100644
--- a/tools/hotplug/common/Makefile
+++ b/tools/hotplug/common/Makefile
@@ -1,19 +1,14 @@
 XEN_ROOT = $(CURDIR)/../../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-HOTPLUGPATH := hotplugpath.sh
-
 # OS-independent hotplug scripts go in this directory
 
 # Xen scripts to go there.
 XEN_SCRIPTS :=
-XEN_SCRIPT_DATA := $(HOTPLUGPATH)
-
-genpath-target = $(call buildmakevars2file,$(HOTPLUGPATH))
-$(eval $(genpath-target))
+XEN_SCRIPT_DATA := hotplugpath.sh
 
 .PHONY: all
-all: $(HOTPLUGPATH)
+all:
 
 .PHONY: install
 install: install-scripts
@@ -40,7 +35,6 @@ uninstall-scripts:
 
 .PHONY: clean
 clean:
-	rm -f $(HOTPLUGPATH)
 
 .PHONY: distclean
 distclean: clean
diff --git a/tools/hotplug/common/hotplugpath.sh.in b/tools/hotplug/common/hotplugpath.sh.in
new file mode 100644
index 0000000000..1036b884b8
--- /dev/null
+++ b/tools/hotplug/common/hotplugpath.sh.in
@@ -0,0 +1,16 @@
+sbindir="@sbindir@"
+bindir="@bindir@"
+LIBEXEC="@LIBEXEC@"
+LIBEXEC_BIN="@LIBEXEC_BIN@"
+libdir="@libdir@"
+SHAREDIR="@SHAREDIR@"
+XENFIRMWAREDIR="@XENFIRMWAREDIR@"
+XEN_CONFIG_DIR="@XEN_CONFIG_DIR@"
+XEN_SCRIPT_DIR="@XEN_SCRIPT_DIR@"
+XEN_LOCK_DIR="@XEN_LOCK_DIR@"
+XEN_RUN_DIR="@XEN_RUN_DIR@"
+XEN_PAGING_DIR="@XEN_PAGING_DIR@"
+XEN_DUMP_DIR="@XEN_DUMP_DIR@"
+XEN_LOG_DIR="@XEN_LOG_DIR@"
+XEN_LIB_DIR="@XEN_LIB_DIR@"
+XEN_RUN_STORED="@XEN_RUN_STORED@"
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:38:17 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 16:38:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427875.677412 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv2G-00029F-GQ; Fri, 21 Oct 2022 16:38:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427875.677412; Fri, 21 Oct 2022 16:38:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv2G-000298-Dp; Fri, 21 Oct 2022 16:38:16 +0000
Received: by outflank-mailman (input) for mailman id 427875;
 Fri, 21 Oct 2022 16:38:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv2F-000292-LE
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:38:15 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv2F-0007Am-Jk
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:38:15 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv2F-0004t2-J6
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:38:15 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=8hxZ3d2MmQJtVCA0HYe83jilD9P/yysoUbYiDNzPD28=; b=h+tUQwbCkfjz6XNuoQWOBPSP3j
	KwbqlnxXJROufGUyhOLchAd9ir/gM8bJDqt9rRN+ZvKfr8on62b8T5dGh0MXeiN7mX9Fho5uChIu1
	nlKabYwErYdjdPxczfAR6YI4j7UCpolSkvnc1RJRP17WZh4Xdn4yfwWczjKrv5Gbn3DE=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] libs/light/gentypes.py: allow to generate headers in subdirectory
Message-Id: <E1olv2F-0004t2-J6@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:38:15 +0000

commit 4c1a3cca790f0a11d3d803f0406845f46a50d177
Author:     Anthony PERARD <anthony.perard@citrix.com>
AuthorDate: Thu Oct 13 14:05:03 2022 +0100
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Fri Oct 14 20:56:57 2022 +0100

    libs/light/gentypes.py: allow to generate headers in subdirectory
    
    This doesn't matter yet, but it will when, for example, the script is
    run from tools/ to generate files in tools/libs/light/.
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/libs/light/gentypes.py | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/tools/libs/light/gentypes.py b/tools/libs/light/gentypes.py
index 9a45e45acc..3fe3873242 100644
--- a/tools/libs/light/gentypes.py
+++ b/tools/libs/light/gentypes.py
@@ -584,6 +584,9 @@ def libxl_C_enum_from_string(ty, str, e, indent = "    "):
         s = indent + s
     return s.replace("\n", "\n%s" % indent).rstrip(indent)
 
+def clean_header_define(header_path):
+    return header_path.split('/')[-1].upper().replace('.','_')
+
 
 if __name__ == '__main__':
     if len(sys.argv) != 6:
@@ -598,7 +601,7 @@ if __name__ == '__main__':
 
     f = open(header, "w")
 
-    header_define = header.upper().replace('.','_')
+    header_define = clean_header_define(header)
     f.write("""#ifndef %s
 #define %s
 
@@ -648,7 +651,7 @@ if __name__ == '__main__':
 
     f = open(header_json, "w")
 
-    header_json_define = header_json.upper().replace('.','_')
+    header_json_define = clean_header_define(header_json)
     f.write("""#ifndef %s
 #define %s
 
@@ -672,7 +675,7 @@ if __name__ == '__main__':
 
     f = open(header_private, "w")
 
-    header_private_define = header_private.upper().replace('.','_')
+    header_private_define = clean_header_define(header_private)
     f.write("""#ifndef %s
 #define %s
 
--
generated by git-patchbot for /home/xen/git/xen.git#master
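The new clean_header_define() helper is small enough to exercise standalone. The following hypothetical Python sketch re-implements it next to the old expression to show why only the basename should feed the include guard once headers live in a subdirectory:

```python
# Hypothetical standalone sketch of the clean_header_define() helper
# added by this patch: keep only the final path component before
# building the include-guard macro name.
def clean_header_define(header_path):
    return header_path.split('/')[-1].upper().replace('.', '_')

# Before the patch, the whole path was uppercased, so a header generated
# in a subdirectory produced a macro name containing '/', which is not
# a valid C identifier:
old_guard = "tools/libs/light/_libxl_types.h".upper().replace('.', '_')
new_guard = clean_header_define("tools/libs/light/_libxl_types.h")

print(old_guard)  # 'TOOLS/LIBS/LIGHT/_LIBXL_TYPES_H' -- not a valid C macro
print(new_guard)  # '_LIBXL_TYPES_H'
```

For a plain filename with no directory component, both expressions agree, which is why the change is safe for the existing in-directory invocation.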


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:38:26 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 16:38:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427876.677416 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv2Q-0002CL-Hu; Fri, 21 Oct 2022 16:38:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427876.677416; Fri, 21 Oct 2022 16:38:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv2Q-0002CD-FM; Fri, 21 Oct 2022 16:38:26 +0000
Received: by outflank-mailman (input) for mailman id 427876;
 Fri, 21 Oct 2022 16:38:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv2P-0002C3-Nx
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:38:25 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv2P-0007Ar-NE
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:38:25 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv2P-0004te-MR
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:38:25 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=QbvhO6oQPqHs8vwJ8j4DgG51o5DyCvF9jWaNjnFzIi4=; b=IqnqsmmT01O+Jceazb06DxdMPB
	CBYvG9NO+aKNyPUX+TmTokh49Pgi7EtBH+lER5ztYJYifeUnM4nMzNwuMkWUka0MSJdLj+RLamDBq
	W+Fo9ifwMeVuGc+0lBp7jMwD8bBI8yiqa7RHOPm7y3GSrT65lKsoznBwAstyxSiLXtU0=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] git-checkout.sh: handle running git-checkout from a different directory
Message-Id: <E1olv2P-0004te-MR@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:38:25 +0000

commit 4834dd5521a36cec118ed84b7c09a509edaafa6b
Author:     Anthony PERARD <anthony.perard@citrix.com>
AuthorDate: Thu Oct 13 14:05:04 2022 +0100
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Fri Oct 14 20:56:57 2022 +0100

    git-checkout.sh: handle running git-checkout from a different directory
    
    "$DIR" might not be a full path, and its ".." directory might not be
    `pwd`. So use `cd -` to undo the first `cd` command.
    
    Also, use `basename` so the symbolic link is created with a relative
    path.
    
    This doesn't matter yet, but it will when, for example, the commands
    that clone OVMF are run from tools/ rather than tools/firmware/.
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 scripts/git-checkout.sh | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/scripts/git-checkout.sh b/scripts/git-checkout.sh
index 20ae31ff23..fd4425ac4e 100755
--- a/scripts/git-checkout.sh
+++ b/scripts/git-checkout.sh
@@ -19,9 +19,9 @@ if test \! -d $DIR-remote; then
 		cd $DIR-remote.tmp
 		$GIT branch -D dummy >/dev/null 2>&1 ||:
 		$GIT checkout -b dummy $TAG
-		cd ..
+		cd -
 	fi
 	mv $DIR-remote.tmp $DIR-remote
 fi
 rm -f $DIR
-ln -sf $DIR-remote $DIR
+ln -sf $(basename $DIR-remote) $DIR
--
generated by git-patchbot for /home/xen/git/xen.git#master
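The symlink change can be illustrated outside the build system. This Python sketch (the ovmf-dir name is hypothetical, and os/tempfile stand in for the shell commands) shows that a link whose stored target is just the basename resolves relative to the link's own directory, so it keeps working if the tree is moved or is entered from elsewhere:

```python
# Sketch of why the patch switches to `ln -sf $(basename $DIR-remote) $DIR`:
# a symlink that stores only the basename is relative, so it resolves
# against its own directory rather than against wherever `ln` was run.
import os
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    os.makedirs(os.path.join(tmp, "ovmf-dir-remote"))

    # New behaviour: the link target is just the basename, i.e. relative.
    target = os.path.basename(os.path.join(tmp, "ovmf-dir-remote"))
    link = os.path.join(tmp, "ovmf-dir")
    os.symlink(target, link)

    print(os.readlink(link))    # 'ovmf-dir-remote' -- a relative target
    print(os.path.isdir(link))  # True: resolves within the same directory
```

With the old `ln -sf $DIR-remote $DIR`, whatever path $DIR expanded to was stored verbatim in the link, which only resolved correctly when the script happened to run from the link's parent directory.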


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:38:37 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 16:38:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427877.677422 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv2b-0002F5-K0; Fri, 21 Oct 2022 16:38:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427877.677422; Fri, 21 Oct 2022 16:38:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv2b-0002Ex-Go; Fri, 21 Oct 2022 16:38:37 +0000
Received: by outflank-mailman (input) for mailman id 427877;
 Fri, 21 Oct 2022 16:38:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv2Z-0002En-RA
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:38:35 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv2Z-0007Ax-QU
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:38:35 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv2Z-0004uH-PN
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:38:35 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=j8eF1tc2PvR1eE/nTy+K5fWyrS4/KSo6RBE+Len/7bA=; b=Z9NeGuOwi2N/BzCTJZP38LhwA7
	Q1j6zIixHTrOUV9bSHxxO5cQzi1XbqFTg6AoVyXuubYconEyLFD8H4JCbr/FX8RQA6x0WyjuAxgFH
	kqo5pLPWV0eJPZseFeeCbeYl0uK4icOkKnQ+t45Xee5ynKdhz38b/kROFXXrtWSuMDb0=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] libs: Avoid exposing -Wl,--version-script to other built library
Message-Id: <E1olv2Z-0004uH-PN@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:38:35 +0000

commit 13c05b9efa2b825935ff9215575b53c1f9ad7965
Author:     Anthony PERARD <anthony.perard@citrix.com>
AuthorDate: Thu Oct 13 14:05:05 2022 +0100
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Fri Oct 14 20:56:57 2022 +0100

    libs: Avoid exposing -Wl,--version-script to other built library
    
    $(SHLIB_LDFLAGS) is used by more targets than the single target that
    expects it (libxenfoo.so.X.Y). There are also some dynamic libraries in
    stats/ that use $(SHLIB_LDFLAGS) (even though those are never built),
    and there's libxenlight_test.so, which doesn't need a version script.
    
    Also, libxenlight_test.so might fail to build if the version script
    doesn't exist yet.
    
    For these reasons, avoid changing the generic $(SHLIB_LDFLAGS) flags,
    and add the flag directly on the command line.
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/libs/libs.mk | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/tools/libs/libs.mk b/tools/libs/libs.mk
index e47fb30ed4..3eb91fc8f3 100644
--- a/tools/libs/libs.mk
+++ b/tools/libs/libs.mk
@@ -12,8 +12,6 @@ MAJOR := $(shell $(XEN_ROOT)/version.sh $(XEN_ROOT)/xen/Makefile)
 endif
 MINOR ?= 0
 
-SHLIB_LDFLAGS += -Wl,--version-script=libxen$(LIBNAME).map
-
 CFLAGS   += -Wmissing-prototypes
 CFLAGS   += $(CFLAGS_xeninclude)
 CFLAGS   += $(foreach lib, $(USELIBS_$(LIBNAME)), $(CFLAGS_libxen$(lib)))
@@ -85,7 +83,7 @@ lib$(LIB_FILE_NAME).so.$(MAJOR): lib$(LIB_FILE_NAME).so.$(MAJOR).$(MINOR)
 	$(SYMLINK_SHLIB) $< $@
 
 lib$(LIB_FILE_NAME).so.$(MAJOR).$(MINOR): $(PIC_OBJS) libxen$(LIBNAME).map
-	$(CC) $(LDFLAGS) $(PTHREAD_LDFLAGS) -Wl,$(SONAME_LDFLAG) -Wl,lib$(LIB_FILE_NAME).so.$(MAJOR) $(SHLIB_LDFLAGS) -o $@ $(PIC_OBJS) $(LDLIBS) $(APPEND_LDFLAGS)
+	$(CC) $(LDFLAGS) $(PTHREAD_LDFLAGS) -Wl,$(SONAME_LDFLAG) -Wl,lib$(LIB_FILE_NAME).so.$(MAJOR) -Wl,--version-script=libxen$(LIBNAME).map $(SHLIB_LDFLAGS) -o $@ $(PIC_OBJS) $(LDLIBS) $(APPEND_LDFLAGS)
 
 # If abi-dumper is available, write out the ABI analysis
 ifneq ($(ABI_DUMPER),)
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:38:47 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 16:38:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427878.677426 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv2l-0002Ho-LO; Fri, 21 Oct 2022 16:38:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427878.677426; Fri, 21 Oct 2022 16:38:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv2l-0002Hg-IG; Fri, 21 Oct 2022 16:38:47 +0000
Received: by outflank-mailman (input) for mailman id 427878;
 Fri, 21 Oct 2022 16:38:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv2j-0002HS-UG
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:38:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv2j-0007BQ-TU
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:38:45 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv2j-0004v8-Se
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:38:45 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=FlqW9+tUPVqJq49bZ+6/XioZwb00NmxKv8yviFT4s/A=; b=hAqEcI5NRHjyW+qumQvGwYP8ik
	vw5pMERy0UqIcNYyt9o0wmv6puKsIPvtdQ0z/p68+bU2fP0DS/VPaeq0KpSd5VHk/4JmLUD4aYegA
	09kR6leguMFkKIWzdKJ2Uk6V4B3QzP+xLbrn3YM2RhsjmA79MB8ESbtDr2qmCPz5ESJk=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] tools/include: Rework Makefile
Message-Id: <E1olv2j-0004v8-Se@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:38:45 +0000

commit 6aabee32b572216ecb7292d26f99e1a3b49b6524
Author:     Anthony PERARD <anthony.perard@citrix.com>
AuthorDate: Thu Oct 13 14:05:07 2022 +0100
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Fri Oct 14 20:56:57 2022 +0100

    tools/include: Rework Makefile
    
    Rework the "xen-xsm" rules so there is no need to change directory to
    run mkflask.sh: store the mkflask.sh path in a variable, use a full
    path for FLASK_H_DEPEND, and make the output directory relative.
    
    Rename the "all-y" target to the more descriptive "xen/lib/x86/all".
    
    Remove the "dist" target, which was the only one remaining in tools/.
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/include/Makefile | 28 +++++++++++++++-------------
 1 file changed, 15 insertions(+), 13 deletions(-)

diff --git a/tools/include/Makefile b/tools/include/Makefile
index b488f7ca9f..81c3d09039 100644
--- a/tools/include/Makefile
+++ b/tools/include/Makefile
@@ -7,17 +7,20 @@ include $(XEN_ROOT)/tools/Rules.mk
 # taken into account, i.e. there should be no rules added here for generating
 # any tools/include/*.h files.
 
-# Relative to $(XEN_ROOT)/xen/xsm/flask
-FLASK_H_DEPEND := policy/initial_sids
+.PHONY: all
+all: xen-foreign xen-dir xen-xsm/.dir
+ifeq ($(CONFIG_X86),y)
+all: xen/lib/x86/all
+endif
 
-.PHONY: all all-y build xen-dir
-all build: all-y xen-foreign xen-dir xen-xsm/.dir
-all-y:
+.PHONY: build
+build: all
 
 .PHONY: xen-foreign
 xen-foreign:
 	$(MAKE) -C xen-foreign
 
+.PHONY: xen-dir
 xen-dir:
 	mkdir -p xen/libelf acpi
 	find xen/ acpi/ -type l -exec rm '{}' +
@@ -36,16 +39,18 @@ ifeq ($(CONFIG_X86),y)
 	ln -s $(XEN_ROOT)/xen/include/xen/lib/x86/Makefile xen/lib/x86/
 endif
 
-all-$(CONFIG_X86): xen-dir
+.PHONY: xen/lib/x86/all
+xen/lib/x86/all: xen-dir
 	$(MAKE) -C xen/lib/x86 all XEN_ROOT=$(XEN_ROOT) PYTHON=$(PYTHON)
 
+MKFLASK := $(XEN_ROOT)/xen/xsm/flask/policy/mkflask.sh
+FLASK_H_DEPEND := $(XEN_ROOT)/xen/xsm/flask/policy/initial_sids
+
 # Not xen/xsm as that clashes with link to
 # $(XEN_ROOT)/xen/include/public/xsm above.
-xen-xsm/.dir: $(XEN_ROOT)/xen/xsm/flask/policy/mkflask.sh \
-	      $(patsubst %,$(XEN_ROOT)/xen/xsm/flask/%,$(FLASK_H_DEPEND))
+xen-xsm/.dir: $(MKFLASK) $(FLASK_H_DEPEND)
 	mkdir -p xen-xsm/flask
-	cd $(XEN_ROOT)/xen/xsm/flask/ && \
-		$(SHELL) policy/mkflask.sh $(AWK) $(CURDIR)/xen-xsm/flask $(FLASK_H_DEPEND)
+	$(SHELL) $(MKFLASK) $(AWK) xen-xsm/flask $(FLASK_H_DEPEND)
 	touch $@
 
 .PHONY: install
@@ -84,8 +89,5 @@ clean:
 	$(MAKE) -C xen-foreign clean
 	rm -f _*.h
 
-.PHONY: dist
-dist: install
-
 .PHONY: distclean
 distclean: clean
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:38:57 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 16:38:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427879.677428 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv2v-0002L6-ME; Fri, 21 Oct 2022 16:38:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427879.677428; Fri, 21 Oct 2022 16:38:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv2v-0002Ky-Jg; Fri, 21 Oct 2022 16:38:57 +0000
Received: by outflank-mailman (input) for mailman id 427879;
 Fri, 21 Oct 2022 16:38:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv2u-0002KT-25
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:38:56 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv2u-0007BU-1H
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:38:56 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv2u-0004xb-0B
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:38:56 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=rUgwdmlkyqMEdmcybZVj2aqjUrQ8d7syV1Qy/yaupFY=; b=YRKfoX3Pgzmy6pm8gGKWMPHQQX
	1KXqesfiyDCAiqtbTur9WbGevLWNtFKO3EBfdYEXHAUWd+kaBN0i/suDsSDjieoj0wvz88t+VWqd2
	Qd3hEq1UJWXpTXxSdZmuY7XlUKrPsmx1TrNE+dPxmytuLaGNt0EU6U6+L6t4daIwOLIw=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] libs/light: Rework acpi table build targets
Message-Id: <E1olv2u-0004xb-0B@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:38:56 +0000

commit 9eb46d3f9808417ee84a38778d808d34058fb546
Author:     Anthony PERARD <anthony.perard@citrix.com>
AuthorDate: Thu Oct 13 14:05:08 2022 +0100
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Fri Oct 14 20:56:57 2022 +0100

    libs/light: Rework acpi table build targets
    
    Currently, a rebuild of libxl always rebuilds "build.o". This is because
    the target depends on "acpi", which never exists. Instead, have
    "build.o" depend on targets that "acpi" actually generates, that is
    $(DSDT_FILES-y).
    
    While "dsdt_*.c" isn't really a dependency of "build.o", a side
    effect of building dsdt_*.c is to also generate the "ssdt_*.h"
    headers that "build.o" needs; listing every header "build.o" needs
    would duplicate the information already available in
    "libacpi/Makefile".
    
    Also, avoid duplicating the "acpi" target for Arm by using a single
    one for both architectures, and move the "acpi" target next to the
    other targets rather than leaving it in the middle of the source
    listing. For the same reason, move the prerequisite listings for both
    $(DSDT_FILES-y) and "build.o".
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/libs/light/Makefile | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/tools/libs/light/Makefile b/tools/libs/light/Makefile
index 13545654c2..d84e5f3cd9 100644
--- a/tools/libs/light/Makefile
+++ b/tools/libs/light/Makefile
@@ -32,14 +32,10 @@ ACPI_PATH  = $(XEN_ROOT)/tools/libacpi
 DSDT_FILES-$(CONFIG_X86) = dsdt_pvh.c
 ACPI_OBJS  = $(patsubst %.c,%.o,$(DSDT_FILES-y)) build.o static_tables.o
 ACPI_PIC_OBJS = $(patsubst %.o,%.opic,$(ACPI_OBJS))
-$(DSDT_FILES-y) build.o build.opic: acpi
+
 vpath build.c $(ACPI_PATH)/
 vpath static_tables.c $(ACPI_PATH)/
 
-.PHONY: acpi
-acpi:
-	$(MAKE) -C $(ACPI_PATH) ACPI_BUILD_DIR=$(CURDIR) DSDT_FILES="$(DSDT_FILES-y)"
-
 OBJS-$(CONFIG_X86) += $(ACPI_OBJS)
 
 CFLAGS += -Wno-format-zero-length -Wmissing-declarations \
@@ -58,8 +54,6 @@ ifeq ($(CONFIG_ARM_64),y)
 DSDT_FILES-y = dsdt_anycpu_arm.c
 OBJS-y += libxl_arm_acpi.o
 OBJS-y += $(DSDT_FILES-y:.c=.o)
-dsdt_anycpu_arm.c:
-	$(MAKE) -C $(ACPI_PATH) ACPI_BUILD_DIR=$(CURDIR) DSDT_FILES="$(DSDT_FILES-y)"
 else
 OBJS-$(CONFIG_ARM) += libxl_arm_no_acpi.o
 endif
@@ -191,6 +185,12 @@ all: $(CLIENTS) $(TEST_PROGS) $(AUTOSRCS) $(AUTOINCS)
 
 $(OBJS-y) $(PIC_OBJS) $(SAVE_HELPER_OBJS) $(LIBXL_TEST_OBJS) $(TEST_PROG_OBJS): $(AUTOINCS) libxl.api-ok
 
+$(DSDT_FILES-y): acpi
+
+# Depend on the source files generated by the "acpi" target even though
+# "build.o" doesn't need them.  It does need the generated headers.
+build.o build.opic: $(DSDT_FILES-y)
+
 libxl.api-ok: check-libxl-api-rules _libxl.api-for-check
 	$(PERL) $^
 	touch $@
@@ -227,6 +227,10 @@ _libxl_type%.h _libxl_type%_json.h _libxl_type%_private.h _libxl_type%.c: libxl_
 $(XEN_INCLUDE)/_%.h: _%.h
 	$(call move-if-changed,_$*.h,$(XEN_INCLUDE)/_$*.h)
 
+.PHONY: acpi
+acpi:
+	$(MAKE) -C $(ACPI_PATH) ACPI_BUILD_DIR=$(CURDIR) DSDT_FILES="$(DSDT_FILES-y)"
+
 libxenlight_test.so: $(PIC_OBJS) $(LIBXL_TEST_OBJS)
 	$(CC) $(LDFLAGS) -Wl,$(SONAME_LDFLAG) -Wl,libxenlight.so.$(MAJOR) $(SHLIB_LDFLAGS) -o $@ $^ $(LDLIBS) $(APPEND_LDFLAGS)
 
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:39:07 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 16:39:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427880.677433 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv35-0002Nm-O4; Fri, 21 Oct 2022 16:39:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427880.677433; Fri, 21 Oct 2022 16:39:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv35-0002Ne-LC; Fri, 21 Oct 2022 16:39:07 +0000
Received: by outflank-mailman (input) for mailman id 427880;
 Fri, 21 Oct 2022 16:39:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv34-0002NU-5D
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:39:06 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv34-0007Bq-4P
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:39:06 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv34-0004yY-3f
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:39:06 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=Rg1TBAVoiwIucyb0E7SFruXfPSSn1tD2N3294p09IP4=; b=y9gp9XLrZEUjSs3zgK7p5oJZ3L
	tjLxHDDjg39q6Cjq4ZzKpb7e/bGaaZNieqGkM6nDaT6hvNBb8ypdtrAyvYiDDg/Jj1ALgn1j8B3Ze
	iVm5n2vpNCVmsLBB5g0cjJkX7dbMiTfC9kmT5v0xxUAE7rTT5Gatc6RYwzd3J0UVaII4=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] libs/light: Rework generation of include/_libxl_*.h
Message-Id: <E1olv34-0004yY-3f@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:39:06 +0000

commit 68d19cfb90a5bb6257e03be3f21c912bac7ec49b
Author:     Anthony PERARD <anthony.perard@citrix.com>
AuthorDate: Thu Oct 13 14:05:09 2022 +0100
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Fri Oct 14 20:56:57 2022 +0100

    libs/light: Rework generation of include/_libxl_*.h
    
    Instead of moving the public "_libxl_*.h" headers, copy them to the
    destination so that make doesn't try to remake the "_libxl_*.h"
    targets in libs/light/ again.
    
    A new .PRECIOUS target is added to tell make not to delete the
    intermediate targets generated by "gentypes.py".
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/libs/light/Makefile | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/tools/libs/light/Makefile b/tools/libs/light/Makefile
index d84e5f3cd9..d681269229 100644
--- a/tools/libs/light/Makefile
+++ b/tools/libs/light/Makefile
@@ -215,6 +215,8 @@ libxl_internal_json.h: _libxl_types_internal_json.h
 $(OBJS-y) $(PIC_OBJS) $(LIBXL_TEST_OBJS) $(TEST_PROG_OBJS) $(SAVE_HELPER_OBJS): $(XEN_INCLUDE)/libxl.h
 $(OBJS-y) $(PIC_OBJS) $(LIBXL_TEST_OBJS): libxl_internal.h
 
+# This exploits the 'multi-target pattern rule' trick.
+# gentypes.py should be executed only once to make all the targets.
 _libxl_type%.h _libxl_type%_json.h _libxl_type%_private.h _libxl_type%.c: libxl_type%.idl gentypes.py idl.py
 	$(eval stem = $(notdir $*))
 	$(PYTHON) gentypes.py libxl_type$(stem).idl __libxl_type$(stem).h __libxl_type$(stem)_private.h \
@@ -224,8 +226,10 @@ _libxl_type%.h _libxl_type%_json.h _libxl_type%_private.h _libxl_type%.c: libxl_
 	$(call move-if-changed,__libxl_type$(stem)_json.h,_libxl_type$(stem)_json.h)
 	$(call move-if-changed,__libxl_type$(stem).c,_libxl_type$(stem).c)
 
-$(XEN_INCLUDE)/_%.h: _%.h
-	$(call move-if-changed,_$*.h,$(XEN_INCLUDE)/_$*.h)
+.PRECIOUS: _libxl_type%.h _libxl_type%.c
+
+$(XEN_INCLUDE)/_libxl_%.h: _libxl_%.h
+	cp -f $< $@
 
 .PHONY: acpi
 acpi:
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:39:17 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 16:39:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427881.677436 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv3F-0002QZ-Pe; Fri, 21 Oct 2022 16:39:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427881.677436; Fri, 21 Oct 2022 16:39:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv3F-0002QR-Ms; Fri, 21 Oct 2022 16:39:17 +0000
Received: by outflank-mailman (input) for mailman id 427881;
 Fri, 21 Oct 2022 16:39:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv3E-0002QG-8I
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:39:16 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv3E-0007Bu-7d
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:39:16 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv3E-0004zF-6f
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:39:16 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=OFxbvDtpqrLNehdjZt4XW6rh4aAYvMH3NIRsZI95IQg=; b=b4BItk0nnudWxwKMxG047kdxYH
	sxq22EdZqb3UXor6YlLaqhOfnWPMsCLx+dgQubn4madXoOpEfB7gXBDKCArpAJJ8mx5Mj1MPiT/3X
	/TdAQ7KIv8U8s2almNwXJtFhOZqt/icMiOJmXYHCD1xXirYBac2lfHLvWRblcfHIvvpU=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] tools/golang/xenlight: Rework gengotypes.py and generation of *.gen.go
Message-Id: <E1olv3E-0004zF-6f@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:39:16 +0000

commit 3f9d53af25dc7f0a9b05e3497822f1eeb47589d9
Author:     Anthony PERARD <anthony.perard@citrix.com>
AuthorDate: Thu Oct 13 14:05:12 2022 +0100
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Fri Oct 14 20:56:57 2022 +0100

    tools/golang/xenlight: Rework gengotypes.py and generation of *.gen.go
    
    gengotypes.py creates both "types.gen.go" and "helpers.gen.go", but
    make can start gengotypes.py twice. Rework the rules so that
    gengotypes.py is executed only once.
    
    Also, add the ability to provide paths telling gengotypes.py where to
    put the files. This doesn't matter yet, but it will when, for example,
    the script is run from tools/ to generate the targets.
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/golang/xenlight/Makefile      |  6 ++++--
 tools/golang/xenlight/gengotypes.py | 12 +++++++++++-
 2 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/tools/golang/xenlight/Makefile b/tools/golang/xenlight/Makefile
index 00e6d17f2b..c5bb6b94a8 100644
--- a/tools/golang/xenlight/Makefile
+++ b/tools/golang/xenlight/Makefile
@@ -15,8 +15,10 @@ all: build
 
 GOXL_GEN_FILES = types.gen.go helpers.gen.go
 
-%.gen.go: gengotypes.py $(LIBXL_SRC_DIR)/libxl_types.idl $(LIBXL_SRC_DIR)/idl.py
-	LIBXL_SRC_DIR=$(LIBXL_SRC_DIR) $(PYTHON) gengotypes.py $(LIBXL_SRC_DIR)/libxl_types.idl
+# This exploits the 'multi-target pattern rule' trick.
+# gengotypes.py should be executed only once to make all the targets.
+$(subst .gen.,.%.,$(GOXL_GEN_FILES)): gengotypes.py $(LIBXL_SRC_DIR)/libxl_types.idl $(LIBXL_SRC_DIR)/idl.py
+	LIBXL_SRC_DIR=$(LIBXL_SRC_DIR) $(PYTHON) gengotypes.py $(LIBXL_SRC_DIR)/libxl_types.idl $(@D)/types.gen.go $(@D)/helpers.gen.go
 
 # Go will do its own dependency checking, and not actually go through
 # with the build if none of the input files have changed.
diff --git a/tools/golang/xenlight/gengotypes.py b/tools/golang/xenlight/gengotypes.py
index ac1cf060dd..9fec60602d 100644
--- a/tools/golang/xenlight/gengotypes.py
+++ b/tools/golang/xenlight/gengotypes.py
@@ -1,5 +1,7 @@
 #!/usr/bin/python
 
+from __future__ import print_function
+
 import os
 import sys
 
@@ -723,7 +725,13 @@ def xenlight_golang_fmt_name(name, exported = True):
     return words[0] + ''.join(x.title() for x in words[1:])
 
 if __name__ == '__main__':
+    if len(sys.argv) != 4:
+        print("Usage: gengotypes.py <idl> <types.gen.go> <helpers.gen.go>", file=sys.stderr)
+        sys.exit(1)
+
     idlname = sys.argv[1]
+    path_types = sys.argv[2]
+    path_helpers = sys.argv[3]
 
     (builtins, types) = idl.parse(idlname)
 
@@ -735,9 +743,11 @@ if __name__ == '__main__':
 // source: {}
 
 """.format(os.path.basename(sys.argv[0]),
-           ' '.join([os.path.basename(a) for a in sys.argv[1:]]))
+           os.path.basename(sys.argv[1]))
 
     xenlight_golang_generate_types(types=types,
+                                   path=path_types,
                                    comment=header_comment)
     xenlight_golang_generate_helpers(types=types,
+                                     path=path_helpers,
                                      comment=header_comment)
--
generated by git-patchbot for /home/xen/git/xen.git#master
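A note for readers unfamiliar with the trick named in the Makefile comment above: GNU make treats a pattern rule with several '%' targets as a single rule whose recipe runs once to produce all of them, whereas an ordinary explicit multi-target rule runs the recipe once per target. The $(subst .gen.,.%.,$(GOXL_GEN_FILES)) call rewrites the file list into such pattern targets. A minimal Python model of the $(subst) step (the helper name is invented for illustration):

```python
def make_subst(old: str, new: str, text: str) -> str:
    """Model GNU make's $(subst old,new,text): literal, global replacement."""
    return text.replace(old, new)

goxl_gen_files = "types.gen.go helpers.gen.go"

# $(subst .gen.,.%.,$(GOXL_GEN_FILES)) -> '%'-containing targets, so make
# treats the rule as one pattern rule building both files in one invocation.
targets = make_subst(".gen.", ".%.", goxl_gen_files)
print(targets)  # types.%.go helpers.%.go
```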


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:39:27 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 16:39:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427882.677441 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv3P-0002Tp-TJ; Fri, 21 Oct 2022 16:39:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427882.677441; Fri, 21 Oct 2022 16:39:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv3P-0002Ti-QQ; Fri, 21 Oct 2022 16:39:27 +0000
Received: by outflank-mailman (input) for mailman id 427882;
 Fri, 21 Oct 2022 16:39:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv3O-0002TU-BJ
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:39:26 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv3O-0007Bz-Ae
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:39:26 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv3O-000506-9o
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:39:26 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=5ffI4hINdRsvp2s/VDVcZ5CdMVKA4Ewg7BlcZoFBMH8=; b=tzNdHR2ORh6XLTwzKgtmlx5CFG
	K++RfaFIkNYFGDqYxgrijxLS2OYQdTiCaX84MTwThPmzFSY5vPLf7v3mjV5cOdyYnNkbK/63+OYMJ
	q9NM8Cs9+Gpf7vjA96BLAUma8O0hERUwfNezExemosMTE4iNFsQ9/fYK+ChSCAhUUl60=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] tools: Rework linking options for ocaml binding libraries
Message-Id: <E1olv3O-000506-9o@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:39:26 +0000

commit 5310a3aa5026fb27d6834306d920d6207a1e0898
Author:     Anthony PERARD <anthony.perard@citrix.com>
AuthorDate: Thu Oct 13 14:05:13 2022 +0100
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Fri Oct 14 20:56:57 2022 +0100

    tools: Rework linking options for ocaml binding libraries
    
    Using a full path to the C libraries when preparing one of the ocaml
    bindings for those libraries makes the binding unusable by an external
    project: the full path is embedded and reused by the external project
    when linking against the binding.
    
    Instead, use the proper way to link a library, '-l'. For in-tree
    builds, we also need to provide the search directory via '-L'.
    
    (The -L search paths are still embedded, but at least that doesn't
    prevent the ocaml bindings from being used.)
    
    Related-to: xen-project/xen#96
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/Rules.mk                       | 8 ++++++++
 tools/ocaml/libs/eventchn/Makefile   | 2 +-
 tools/ocaml/libs/xc/Makefile         | 2 +-
 tools/ocaml/libs/xentoollog/Makefile | 2 +-
 tools/ocaml/libs/xl/Makefile         | 2 +-
 5 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/tools/Rules.mk b/tools/Rules.mk
index a165dc4bda..34d495fff7 100644
--- a/tools/Rules.mk
+++ b/tools/Rules.mk
@@ -113,6 +113,14 @@ define xenlibs-ldflags
     $(foreach lib,$(1),-L$(XEN_ROOT)/tools/libs/$(lib))
 endef
 
+# Flags for linking against all Xen libraries listed in $(1), making use of
+# -L and -l instead of providing a path to the shared library.
+define xenlibs-ldflags-ldlibs
+    $(call xenlibs-ldflags,$(1)) \
+    $(foreach lib,$(1), -l$(FILENAME_$(lib))) \
+    $(foreach lib,$(1),$(xenlibs-ldlibs-$(lib)))
+endef
+
 define LIB_defs
  FILENAME_$(1) ?= xen$(1)
  XEN_libxen$(1) = $$(XEN_ROOT)/tools/libs/$(1)
diff --git a/tools/ocaml/libs/eventchn/Makefile b/tools/ocaml/libs/eventchn/Makefile
index 7362a28d9e..dc560ba49b 100644
--- a/tools/ocaml/libs/eventchn/Makefile
+++ b/tools/ocaml/libs/eventchn/Makefile
@@ -8,7 +8,7 @@ OBJS = xeneventchn
 INTF = $(foreach obj, $(OBJS),$(obj).cmi)
 LIBS = xeneventchn.cma xeneventchn.cmxa
 
-LIBS_xeneventchn = $(LDLIBS_libxenevtchn)
+LIBS_xeneventchn = $(call xenlibs-ldflags-ldlibs,evtchn)
 
 all: $(INTF) $(LIBS) $(PROGRAMS)
 
diff --git a/tools/ocaml/libs/xc/Makefile b/tools/ocaml/libs/xc/Makefile
index 67acc46bee..3b76e9ad7b 100644
--- a/tools/ocaml/libs/xc/Makefile
+++ b/tools/ocaml/libs/xc/Makefile
@@ -10,7 +10,7 @@ OBJS = xenctrl
 INTF = xenctrl.cmi
 LIBS = xenctrl.cma xenctrl.cmxa
 
-LIBS_xenctrl = $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest)
+LIBS_xenctrl = $(call xenlibs-ldflags-ldlibs,ctrl guest)
 
 xenctrl_OBJS = $(OBJS)
 xenctrl_C_OBJS = xenctrl_stubs
diff --git a/tools/ocaml/libs/xentoollog/Makefile b/tools/ocaml/libs/xentoollog/Makefile
index 9ede2fd124..1645b40faf 100644
--- a/tools/ocaml/libs/xentoollog/Makefile
+++ b/tools/ocaml/libs/xentoollog/Makefile
@@ -13,7 +13,7 @@ OBJS = xentoollog
 INTF = xentoollog.cmi
 LIBS = xentoollog.cma xentoollog.cmxa
 
-LIBS_xentoollog = $(LDLIBS_libxentoollog)
+LIBS_xentoollog = $(call xenlibs-ldflags-ldlibs,toollog)
 
 xentoollog_OBJS = $(OBJS)
 xentoollog_C_OBJS = xentoollog_stubs
diff --git a/tools/ocaml/libs/xl/Makefile b/tools/ocaml/libs/xl/Makefile
index 7c1c4edced..22d6c93aae 100644
--- a/tools/ocaml/libs/xl/Makefile
+++ b/tools/ocaml/libs/xl/Makefile
@@ -15,7 +15,7 @@ LIBS = xenlight.cma xenlight.cmxa
 
 OCAMLINCLUDE += -I ../xentoollog
 
-LIBS_xenlight = $(LDLIBS_libxenlight)
+LIBS_xenlight = $(call xenlibs-ldflags-ldlibs,light)
 
 xenlight_OBJS = $(OBJS)
 xenlight_C_OBJS = xenlight_stubs
--
generated by git-patchbot for /home/xen/git/xen.git#master
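As a rough illustration of what the new xenlibs-ldflags-ldlibs macro expands to for the xenctrl binding above, here is a Python model. The function name and the /path/to/xen root are invented for illustration; FILENAME_$(lib) defaults to xen$(lib) per LIB_defs, and the per-library xenlibs-ldlibs-* extras are omitted:

```python
def xenlibs_ldflags_ldlibs(libs, xen_root="/path/to/xen"):
    """Model the make macro: -L search directories first, then -l names,
    instead of embedding a full path to each shared library."""
    ldflags = [f"-L{xen_root}/tools/libs/{lib}" for lib in libs]
    # FILENAME_$(lib) defaults to xen$(lib), e.g. ctrl -> libxenctrl
    ldlibs = [f"-lxen{lib}" for lib in libs]
    return " ".join(ldflags + ldlibs)

print(xenlibs_ldflags_ldlibs(["ctrl", "guest"]))
# -L/path/to/xen/tools/libs/ctrl -L/path/to/xen/tools/libs/guest -lxenctrl -lxenguest
```

Only the -L directories end up recorded by the linker, which is harmless; the library is found by name at link time wherever it is installed.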


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:39:37 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 16:39:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427883.677445 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv3Z-0002Wa-Um; Fri, 21 Oct 2022 16:39:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427883.677445; Fri, 21 Oct 2022 16:39:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv3Z-0002WT-Rw; Fri, 21 Oct 2022 16:39:37 +0000
Received: by outflank-mailman (input) for mailman id 427883;
 Fri, 21 Oct 2022 16:39:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv3Y-0002W9-EM
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:39:36 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv3Y-0007C3-Dh
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:39:36 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv3Y-00050r-Ck
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:39:36 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=GqyBGp2V8K6bKWm090YJ/YX80hdxKxFOYRcWTg42ets=; b=ZUZbDyF7dbq/18XB7CWyyWCZrL
	nQ2xN/FUOwQshGb/0tRNdIK08DRqESqNdHSVLppKXvCaibeRUAbp/FkYOFaZI1Dplf67TgDP3YeRW
	p5Q2+DWuhUIRHDE25u6NDub77qNPd4x0C5GGFAvUGM/Pq+Wcd1jk6ygfv1B1FmPmWsO8=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] tools: Workaround wrong use of tools/Rules.mk by qemu-trad
Message-Id: <E1olv3Y-00050r-Ck@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:39:36 +0000

commit cc4747be8ba157a3b310921e9ee07fb8545aa206
Author:     Anthony PERARD <anthony.perard@citrix.com>
AuthorDate: Mon Oct 17 11:34:03 2022 +0100
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Mon Oct 17 14:57:34 2022 +0100

    tools: Workaround wrong use of tools/Rules.mk by qemu-trad
    
    The qemu-trad build system, when built from xen.git, makes use of
    Rules.mk (set up via qemu-trad.git/xen-setup). This means that changes
    to Rules.mk have an impact on our ability to build qemu-trad.
    
    Recent commit e4f5949c4466 ("tools: Add -Werror by default to all
    tools/") added "-Werror" to CFLAGS, and qemu-trad started to use it.
    But the build fails, as lots of warnings are now turned into errors.
    
    We should teach qemu-trad and xen.git not to use Rules.mk when
    building qemu-trad, but for now, avoid adding -Werror to CFLAGS when
    building qemu-trad.
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/Makefile | 1 +
 tools/Rules.mk | 3 +++
 2 files changed, 4 insertions(+)

diff --git a/tools/Makefile b/tools/Makefile
index 0c1d8b64a4..9e28027835 100644
--- a/tools/Makefile
+++ b/tools/Makefile
@@ -159,6 +159,7 @@ qemu-traditional-recurse = \
 	set -e; \
 		$(buildmakevars2shellvars); \
 		export CONFIG_BLKTAP1=n; \
+		export BUILDING_QEMU_TRAD=y; \
 		cd qemu-xen-traditional-dir; \
 		$(1)
 
diff --git a/tools/Rules.mk b/tools/Rules.mk
index 34d495fff7..6e135387bd 100644
--- a/tools/Rules.mk
+++ b/tools/Rules.mk
@@ -141,9 +141,12 @@ endif
 
 CFLAGS_libxenlight += $(CFLAGS_libxenctrl)
 
+# Don't add -Werror if we are used by the qemu-trad build system.
+ifndef BUILDING_QEMU_TRAD
 ifeq ($(CONFIG_WERROR),y)
 CFLAGS += -Werror
 endif
+endif
 
 ifeq ($(debug),y)
 # Use -Og if available, -O0 otherwise
--
generated by git-patchbot for /home/xen/git/xen.git#master
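The guard added to Rules.mk composes two conditions: -Werror is appended only when CONFIG_WERROR=y and BUILDING_QEMU_TRAD is unset. A small Python model of the resulting behaviour (function name invented for illustration):

```python
def werror_cflags(config_werror: bool, building_qemu_trad: bool) -> list:
    """Model the ifndef/ifeq nesting in tools/Rules.mk: the -Werror flag is
    added only outside the qemu-trad build, and only when CONFIG_WERROR=y."""
    cflags = []
    if not building_qemu_trad and config_werror:
        cflags.append("-Werror")
    return cflags

assert werror_cflags(config_werror=True, building_qemu_trad=False) == ["-Werror"]
assert werror_cflags(config_werror=True, building_qemu_trad=True) == []
assert werror_cflags(config_werror=False, building_qemu_trad=False) == []
```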


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:39:48 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 16:39:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427884.677449 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv3k-0002ZR-0I; Fri, 21 Oct 2022 16:39:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427884.677449; Fri, 21 Oct 2022 16:39:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv3j-0002ZJ-Tc; Fri, 21 Oct 2022 16:39:47 +0000
Received: by outflank-mailman (input) for mailman id 427884;
 Fri, 21 Oct 2022 16:39:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv3i-0002Z6-HV
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:39:46 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv3i-0007CU-Gl
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:39:46 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv3i-00051u-Fz
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:39:46 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=SxrHsFtjyYDI4IY9kk2bgitfRu1rhIGKfk5AvMxmcPM=; b=TAr1t9dq6SpY84irbfgQ9Oqep5
	r/NreYDTNaFPTIVJnxOQdR+ef8pl05vgWeetwq+i4AoYlX52uNEjWNDQfhXtNQ7VcHDpmRED2zqH0
	DQz4gVMVseGYMWze5opvdY9exLukxmOfZb6pjUhaw9tQNJsSmIu0J8z/vcWc6hlX7hvQ=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] arm/p2m: Rework p2m_init()
Message-Id: <E1olv3i-00051u-Fz@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:39:46 +0000

commit 3783e583319fa1ce75e414d851f0fde191a14753
Author:     Andrew Cooper <andrew.cooper3@citrix.com>
AuthorDate: Tue Oct 18 14:23:45 2022 +0000
Commit:     Julien Grall <jgrall@amazon.com>
CommitDate: Thu Oct 20 09:39:56 2022 +0100

    arm/p2m: Rework p2m_init()
    
    p2m_init() is mostly trivial initialisation, but has two fallible operations
    which are on either side of the backpointer trigger for teardown to take
    action.
    
    p2m_free_vmid() is idempotent with a failed p2m_alloc_vmid(), so rearrange
    p2m_init() to perform all trivial setup, then set the backpointer, then
    perform all fallible setup.
    
    This will simplify a future bugfix which needs to add a third fallible
    operation.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/arch/arm/p2m.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index f17500ddf3..6826f63150 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1754,7 +1754,7 @@ void p2m_final_teardown(struct domain *d)
 int p2m_init(struct domain *d)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    int rc = 0;
+    int rc;
     unsigned int cpu;
 
     rwlock_init(&p2m->lock);
@@ -1763,11 +1763,6 @@ int p2m_init(struct domain *d)
     INIT_PAGE_LIST_HEAD(&d->arch.paging.p2m_freelist);
 
     p2m->vmid = INVALID_VMID;
-
-    rc = p2m_alloc_vmid(d);
-    if ( rc != 0 )
-        return rc;
-
     p2m->max_mapped_gfn = _gfn(0);
     p2m->lowest_mapped_gfn = _gfn(ULONG_MAX);
 
@@ -1783,8 +1778,6 @@ int p2m_init(struct domain *d)
     p2m->clean_pte = is_iommu_enabled(d) &&
         !iommu_has_feature(d, IOMMU_FEAT_COHERENT_WALK);
 
-    rc = p2m_alloc_table(d);
-
     /*
      * Make sure that the type chosen is able to store a vCPU ID
      * between 0 and the maximum of virtual CPUs supported, as long as
@@ -1797,13 +1790,20 @@ int p2m_init(struct domain *d)
        p2m->last_vcpu_ran[cpu] = INVALID_VCPU_ID;
 
     /*
-     * Besides getting a domain when we only have the p2m in hand,
-     * the back pointer to domain is also used in p2m_teardown()
-     * as an end-of-initialization indicator.
+     * "Trivial" initialisation is now complete.  Set the backpointer so
+     * p2m_teardown() and friends know to do something.
      */
     p2m->domain = d;
 
-    return rc;
+    rc = p2m_alloc_vmid(d);
+    if ( rc )
+        return rc;
+
+    rc = p2m_alloc_table(d);
+    if ( rc )
+        return rc;
+
+    return 0;
 }
 
 /*
--
generated by git-patchbot for /home/xen/git/xen.git#master
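The ordering principle of the rework (trivial setup first, then set the backpointer that arms teardown, then the fallible steps, each paired with an idempotent free) can be sketched as follows. This is an illustrative Python model with invented helpers, not the Xen code:

```python
class P2M:
    def __init__(self):
        self.domain = None   # backpointer doubles as "teardown needed" flag
        self.vmid = None     # None plays the role of INVALID_VMID

def alloc_vmid():
    """Fallible step; may raise in a real implementation."""
    return 7

def free_vmid(p2m):
    """Idempotent: safe to call even if alloc_vmid() never succeeded."""
    p2m.vmid = None

def p2m_init(p2m, domain):
    p2m.vmid = None          # 1. trivial setup: mark resources as not held
    p2m.domain = domain      # 2. backpointer set: teardown now takes action
    p2m.vmid = alloc_vmid()  # 3. fallible setup happens last; on error the
    return 0                 #    caller's teardown path cleans up safely

def p2m_teardown(p2m):
    if p2m.domain is None:   # init never set the backpointer: nothing to do
        return
    free_vmid(p2m)
```

Because every free is idempotent against its own failed allocation, a third fallible step can later be appended after the existing two without reworking the error handling.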


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:39:58 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 16:39:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427885.677453 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv3u-0002cK-1f; Fri, 21 Oct 2022 16:39:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427885.677453; Fri, 21 Oct 2022 16:39:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv3t-0002cD-V8; Fri, 21 Oct 2022 16:39:57 +0000
Received: by outflank-mailman (input) for mailman id 427885;
 Fri, 21 Oct 2022 16:39:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv3s-0002bj-KP
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:39:56 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv3s-0007CY-Jj
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:39:56 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv3s-00052u-Iy
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:39:56 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=/6DF+hXkRhxOsqmVam42si70EHKH0FwYMvqAfa9A/UY=; b=UFwfA8z7BDlAVtEBpqRMnLrkwp
	RFr+RlX1whB4Q/6EqqThJCaR4V0E8gqwBGnmw68NuzDHmfJOtnuUJBbisfj2eDZYvGb4JqZbaxZ/1
	xa0rxW09RxYlSwbMR0Qj/E0C7R/+tws3yOS4AH63LBPHm+4/H3jEETB8t8ibXPCItfaM=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] xen/arm: p2m: Populate pages for GICv2 mapping in p2m_init()
Message-Id: <E1olv3s-00052u-Iy@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:39:56 +0000

commit c7cff1188802646eaa38e918e5738da0e84949be
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Tue Oct 18 14:23:46 2022 +0000
Commit:     Julien Grall <jgrall@amazon.com>
CommitDate: Thu Oct 20 09:40:10 2022 +0100

    xen/arm: p2m: Populate pages for GICv2 mapping in p2m_init()
    
    Hardware using GICv2 needs a P2M mapping of the 8KB GICv2 area
    created when the domain is created. The worst case for the page
    tables requires 6 P2M pages, as the two pages will be consecutive
    but not necessarily in the same L3 page table; to keep a buffer,
    populate 16 pages as the default value of the P2M pages pool in
    p2m_init() at the domain creation stage to satisfy the GICv2
    requirement. For GICv3, the above-mentioned P2M mapping is not
    necessary, but since the 16 pages allocated here would not be lost,
    populate these pages unconditionally.
    
    With the default 16 P2M pages populated, failures can now happen
    during domain creation with P2M pages already in use. To properly
    free the P2M in this case, first make preemption of p2m_teardown()
    optional, then call p2m_teardown() and p2m_set_allocation(d, 0, NULL)
    non-preemptively in p2m_final_teardown(). As a non-preemptive
    p2m_teardown() should only return 0, use a BUG_ON to confirm that.
    
    Since p2m_final_teardown() is called either after
    domain_relinquish_resources(), where relinquish_p2m_mapping() has
    been called, or from the failure path of
    domain_create()/arch_domain_create(), where mappings that require
    p2m_put_l3_page() should never be created, relinquish_p2m_mapping()
    is not added to p2m_final_teardown(); add in-code comments to note
    this.
    
    Fixes: cbea5a1149ca ("xen/arm: Allocate and free P2M pages from the P2M pool")
    Suggested-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Release-acked-by: George Dunlap <george.dunlap@citrix.com>
---
 xen/arch/arm/domain.c          |  2 +-
 xen/arch/arm/include/asm/p2m.h | 14 ++++++++++----
 xen/arch/arm/p2m.c             | 34 ++++++++++++++++++++++++++++++++--
 3 files changed, 43 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 2c84e6dbbb..38e22f12af 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -1064,7 +1064,7 @@ int domain_relinquish_resources(struct domain *d)
             return ret;
 
     PROGRESS(p2m):
-        ret = p2m_teardown(d);
+        ret = p2m_teardown(d, true);
         if ( ret )
             return ret;
 
diff --git a/xen/arch/arm/include/asm/p2m.h b/xen/arch/arm/include/asm/p2m.h
index 42bfd548c4..c8f14d13c2 100644
--- a/xen/arch/arm/include/asm/p2m.h
+++ b/xen/arch/arm/include/asm/p2m.h
@@ -194,14 +194,18 @@ int p2m_init(struct domain *d);
 
 /*
  * The P2M resources are freed in two parts:
- *  - p2m_teardown() will be called when relinquish the resources. It
- *    will free large resources (e.g. intermediate page-tables) that
- *    requires preemption.
+ *  - p2m_teardown() will be called preemptively when relinquishing the
+ *    resources, in which case it will free large resources (e.g. intermediate
+ *    page-tables), which requires preemption.
  *  - p2m_final_teardown() will be called when the domain struct is being
  *    freed. This *cannot* be preempted and therefore only small
  *    resources should be freed here.
+ *  Note that p2m_final_teardown() will also call p2m_teardown(), to properly
+ *  free the P2M when failures happen in the domain creation with P2M pages
+ *  already in use. In this case p2m_teardown() is called non-preemptively and
+ *  p2m_teardown() will always return 0.
  */
-int p2m_teardown(struct domain *d);
+int p2m_teardown(struct domain *d, bool allow_preemption);
 void p2m_final_teardown(struct domain *d);
 
 /*
@@ -266,6 +270,8 @@ mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
 /*
  * Direct set a p2m entry: only for use by the P2M code.
  * The P2M write lock should be taken.
+ * TODO: Add a check in __p2m_set_entry() to avoid creating a mapping in
+ * arch_domain_create() that requires p2m_put_l3_page() to be called.
  */
 int p2m_set_entry(struct p2m_domain *p2m,
                   gfn_t sgfn,
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 6826f63150..00d05bb708 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1685,7 +1685,7 @@ static void p2m_free_vmid(struct domain *d)
     spin_unlock(&vmid_alloc_lock);
 }
 
-int p2m_teardown(struct domain *d)
+int p2m_teardown(struct domain *d, bool allow_preemption)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     unsigned long count = 0;
@@ -1693,6 +1693,9 @@ int p2m_teardown(struct domain *d)
     unsigned int i;
     int rc = 0;
 
+    if ( page_list_empty(&p2m->pages) )
+        return 0;
+
     p2m_write_lock(p2m);
 
     /*
@@ -1716,7 +1719,7 @@ int p2m_teardown(struct domain *d)
         p2m_free_page(p2m->domain, pg);
         count++;
         /* Arbitrarily preempt every 512 iterations */
-        if ( !(count % 512) && hypercall_preempt_check() )
+        if ( allow_preemption && !(count % 512) && hypercall_preempt_check() )
         {
             rc = -ERESTART;
             break;
@@ -1736,7 +1739,20 @@ void p2m_final_teardown(struct domain *d)
     if ( !p2m->domain )
         return;
 
+    /*
+     * No need to call relinquish_p2m_mapping() here because
+     * p2m_final_teardown() is called either after domain_relinquish_resources()
+     * where relinquish_p2m_mapping() has been called, or from failure path of
+     * domain_create()/arch_domain_create() where mappings that require
+     * p2m_put_l3_page() should never be created. For the latter case, also see
+     * comment on top of the p2m_set_entry() for more info.
+     */
+
+    BUG_ON(p2m_teardown(d, false));
     ASSERT(page_list_empty(&p2m->pages));
+
+    while ( p2m_teardown_allocation(d) == -ERESTART )
+        continue; /* No preemption support here */
     ASSERT(page_list_empty(&d->arch.paging.p2m_freelist));
 
     if ( p2m->root )
@@ -1803,6 +1819,20 @@ int p2m_init(struct domain *d)
     if ( rc )
         return rc;
 
+    /*
+     * Hardware using GICv2 needs a P2M mapping of the 8KB GICv2 area
+     * created when the domain is created. Considering the worst case for
+     * page tables and keeping a buffer, populate 16 pages to the P2M pages
+     * pool here. For GICv3, the above-mentioned P2M mapping is not
+     * necessary, but since the 16 pages allocated here would not be lost,
+     * populate these pages unconditionally.
+     */
+    spin_lock(&d->arch.paging.lock);
+    rc = p2m_set_allocation(d, 16, NULL);
+    spin_unlock(&d->arch.paging.lock);
+    if ( rc )
+        return rc;
+
     return 0;
 }
 
--
generated by git-patchbot for /home/xen/git/xen.git#master
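One plausible reading of the 6-page worst case in the message, assuming up to three intermediate page-table levels may need allocating per 4KB mapping and that the two GICv2 pages share no tables at all (an illustration, not derived from the Xen sources):

```python
# The 8KB GICv2 area spans two consecutive 4KB pages; in the worst case they
# sit in different L3 tables and therefore share no intermediate tables.
GICV2_PAGES = 2
TABLES_PER_MAPPING = 3   # assumed: intermediate levels allocatable per mapping

worst_case = GICV2_PAGES * TABLES_PER_MAPPING   # 6 pages, per the message
POOL_DEFAULT = 16                               # worst case plus a buffer

assert worst_case == 6
assert POOL_DEFAULT >= worst_case
```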


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:40:08 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 16:40:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427886.677456 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv44-0003RX-37; Fri, 21 Oct 2022 16:40:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427886.677456; Fri, 21 Oct 2022 16:40:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv44-0003RQ-0Y; Fri, 21 Oct 2022 16:40:08 +0000
Received: by outflank-mailman (input) for mailman id 427886;
 Fri, 21 Oct 2022 16:40:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv42-0003Mo-NP
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:40:06 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv42-0007Cv-MZ
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:40:06 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv42-00054X-Lq
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:40:06 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=bmGQQTCV47yty9evDHPTk30E95sHYHTKkYUPG12irBc=; b=ZMyPLLUNnY5hMiTFbli69HvwwQ
	0BoLWjwSa0qkfu++HNE1apwd1HKQeuow7NuEyJmxbWrgKCKiSZOVYSAPZkymGQWQXb1YX4UAgVy8E
	CGf8w3hV3ImDrg+RpZtYrrp//BWNuKdRi0UUEJPeIUrjfCkJ6Sk3AXypkA2//UwJs1Uw=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] test/vpci: add dummy cfcheck define
Message-Id: <E1olv42-00054X-Lq@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:40:06 +0000

commit b71419530d70d9b1f2ba524aabd27a9efe08f52f
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Thu Oct 20 16:36:48 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Thu Oct 20 16:36:48 2022 +0200

    test/vpci: add dummy cfcheck define
    
    Some vpci functions got the cfcheck attribute added, but that's not
    defined in the user-space test harness, so add a dummy define in order
    for the harness to build.
    
    Fixes: 4ed7d5525f ('xen/vpci: CFI hardening')
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Anthony PERARD <anthony.perard@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/tests/vpci/emul.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tools/tests/vpci/emul.h b/tools/tests/vpci/emul.h
index 2e1d3057c9..386b15eb86 100644
--- a/tools/tests/vpci/emul.h
+++ b/tools/tests/vpci/emul.h
@@ -37,6 +37,7 @@
 #define prefetch(x) __builtin_prefetch(x)
 #define ASSERT(x) assert(x)
 #define __must_check __attribute__((__warn_unused_result__))
+#define cf_check
 
 #include "list.h"
 
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:40:18 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 16:40:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427887.677460 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv4E-0003U7-4p; Fri, 21 Oct 2022 16:40:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427887.677460; Fri, 21 Oct 2022 16:40:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv4E-0003Tz-1w; Fri, 21 Oct 2022 16:40:18 +0000
Received: by outflank-mailman (input) for mailman id 427887;
 Fri, 21 Oct 2022 16:40:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv4C-0003Tf-QH
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:40:16 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv4C-0007D3-PV
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:40:16 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv4C-00055K-Oo
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:40:16 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=3+7lf5eUF//uVtmbhOmbhX52QNCcHxMjhZRtWDmMQIk=; b=p2OBnOUHg4vmb8EyGR5N9R/H6L
	2GMJ7RuuzjNkx0O48G0VgElMB8ZdfEKWrKd382MJSNsagkgdsaAYNxwLL35oynB/4I2R3vwYMi//K
	a2XAreTM2ZhDLgUC5dTekR7a1s8uUF5Ldw3v5ojHc3qadQ7YFVgoUXIYgahNJJWxz5zY=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] test/vpci: fix vPCI test harness to provide pci_get_pdev()
Message-Id: <E1olv4C-00055K-Oo@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:40:16 +0000

commit 1cfccd4b07dd1cf38290d930e2b687c031589db3
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Thu Oct 20 16:37:15 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Thu Oct 20 16:37:15 2022 +0200

    test/vpci: fix vPCI test harness to provide pci_get_pdev()
    
    Instead of pci_get_pdev_by_domain(), which is no longer present in the
    hypervisor.
    
    While there, add parentheses around the define value.
    
    Fixes: a37f9ea7a6 ('PCI: fold pci_get_pdev{,_by_domain}()')
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Anthony PERARD <anthony.perard@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/tests/vpci/emul.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/tests/vpci/emul.h b/tools/tests/vpci/emul.h
index 386b15eb86..f03e3a56d1 100644
--- a/tools/tests/vpci/emul.h
+++ b/tools/tests/vpci/emul.h
@@ -92,7 +92,7 @@ typedef union {
 #define xmalloc(type) ((type *)malloc(sizeof(type)))
 #define xfree(p) free(p)
 
-#define pci_get_pdev_by_domain(...) &test_pdev
+#define pci_get_pdev(...) (&test_pdev)
 #define pci_get_ro_map(...) NULL
 
 #define test_bit(...) false
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:40:28 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 16:40:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427888.677465 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv4O-0003XZ-8c; Fri, 21 Oct 2022 16:40:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427888.677465; Fri, 21 Oct 2022 16:40:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv4O-0003XR-5t; Fri, 21 Oct 2022 16:40:28 +0000
Received: by outflank-mailman (input) for mailman id 427888;
 Fri, 21 Oct 2022 16:40:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv4M-0003XG-TC
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:40:26 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv4M-0007D7-SQ
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:40:26 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv4M-000568-Rl
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:40:26 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=loQt2VfOXTjaMYwJkHVgYwlFkpOTO3eY+o8+grQwQik=; b=08dnqsU94AnQQCCThhbod5x+R/
	udswR8z2QrNwqhlKwNbD1fvLxQ3No3Vz+v5AD8rue/LnqyVXO+38xDEOytOI8E1UyVWinEneYiyDT
	d9eLxU8EqB6jKlRYArMyx82Wx1/k9+Kztt1+5MB+wgNp26L2Fp5GxFKyZclY54UkKqTI=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] test/vpci: enable by default
Message-Id: <E1olv4M-000568-Rl@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:40:26 +0000

commit e9444d87427a1ac4518ee0a62da5d8803262c6cb
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Thu Oct 20 16:37:29 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Thu Oct 20 16:37:29 2022 +0200

    test/vpci: enable by default
    
    CONFIG_HAS_PCI is not defined for the tools build, and as a result the
    vpci harness would never get built.  Fix this by building it
    unconditionally; there's nothing arch-specific in it.
    
    Reported-by: Andrew Cooper <Andrew.Cooper3@citrix.com>
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Anthony PERARD <anthony.perard@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/tests/Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/tests/Makefile b/tools/tests/Makefile
index 33e32730c4..d99146d56a 100644
--- a/tools/tests/Makefile
+++ b/tools/tests/Makefile
@@ -10,7 +10,7 @@ SUBDIRS-$(CONFIG_X86) += x86_emulator
 endif
 SUBDIRS-y += xenstore
 SUBDIRS-y += depriv
-SUBDIRS-$(CONFIG_HAS_PCI) += vpci
+SUBDIRS-y += vpci
 
 .PHONY: all clean install distclean uninstall
 all clean distclean install uninstall: %: subdirs-%
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:40:38 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 16:40:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427889.677469 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv4Y-0003aS-AC; Fri, 21 Oct 2022 16:40:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427889.677469; Fri, 21 Oct 2022 16:40:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv4Y-0003aK-7Q; Fri, 21 Oct 2022 16:40:38 +0000
Received: by outflank-mailman (input) for mailman id 427889;
 Fri, 21 Oct 2022 16:40:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv4W-0003aA-W4
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:40:36 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv4W-0007DB-VK
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:40:36 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv4W-00056v-UU
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:40:36 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=xWFslnmosA6FZ6tuAafu3Ajsxk4Tml2w65juSLBYYnE=; b=EhY1xVB68TaM5mZmPDUnJYINOl
	mx7cAA5hb7f6ZIT9J7UT0nHW7VK3Tk+5LpRt9M+xioAhO51Wxe6ojTQL4LUj7+u6GYcrSPoOxID0h
	EE1qt5OjtaPcjDzaHAwwQaxjWRfUj1fQixF+wHtqNb5XPadxD1UL5aE9Es/0tqwx1jLE=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] tools/oxenstored: Fix Oxenstored Live Update
Message-Id: <E1olv4W-00056v-UU@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:40:36 +0000

commit 7110192b1df697be84a50f741651d4c3cb129504
Author:     Andrew Cooper <andrew.cooper3@citrix.com>
AuthorDate: Wed Oct 19 18:12:33 2022 +0100
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Thu Oct 20 15:48:22 2022 +0100

    tools/oxenstored: Fix Oxenstored Live Update
    
    tl;dr This hunk was part of the patch emailed to xen-devel, but was missing
    from what ultimately got committed.
    
    https://lore.kernel.org/xen-devel/4164cb728313c3b9fc38cf5e9ecb790ac93a9600.1610748224.git.edvin.torok@citrix.com/
    is the patch in question, but was part of a series that had threading issues.
    I have a vague recollection that I sourced the commits from a local branch,
    which clearly wasn't as up-to-date as I had thought.
    
    Either way, it's my fault/mistake, and this hunk should have been part of what
    got committed.
    
    Fixes: 00c48f57ab36 ("tools/oxenstored: Start live update process")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/ocaml/xenstored/xenstored.ml | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/tools/ocaml/xenstored/xenstored.ml b/tools/ocaml/xenstored/xenstored.ml
index d44ae673c4..fc90fcdeb5 100644
--- a/tools/ocaml/xenstored/xenstored.ml
+++ b/tools/ocaml/xenstored/xenstored.ml
@@ -352,6 +352,11 @@ let _ =
 		rw_sock
 	) in
 
+	(* required for xenstore-control to detect availability of live-update *)
+	Store.mkdir store Perms.Connection.full_rights (Store.Path.of_string "/tool");
+	Store.write store Perms.Connection.full_rights
+		(Store.Path.of_string "/tool/xenstored") Sys.executable_name;
+
 	Sys.set_signal Sys.sighup (Sys.Signal_handle sighup_handler);
 	Sys.set_signal Sys.sigterm (Sys.Signal_handle (fun _ ->
 		info "Received SIGTERM";
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Fri Oct 21 16:40:47 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Oct 2022 16:40:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.427890.677473 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv4h-0003ct-BX; Fri, 21 Oct 2022 16:40:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 427890.677473; Fri, 21 Oct 2022 16:40:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1olv4h-0003cm-8o; Fri, 21 Oct 2022 16:40:47 +0000
Received: by outflank-mailman (input) for mailman id 427890;
 Fri, 21 Oct 2022 16:40:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv4h-0003cg-30
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:40:47 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv4h-0007Da-2L
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:40:47 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1olv4h-00057k-1Q
 for xen-changelog@lists.xenproject.org; Fri, 21 Oct 2022 16:40:47 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=YsiZoTor1BZClQ2RmHOZi9zDIX1KOlw39GjlhyI6ns8=; b=PZzlUX3yKKFR2bvHzXnAgvbjjE
	9tjGsyzWW9vQZ+1ia8HsWwOq9koDQGQKWFUjRuagk8u2KsSWIS7+r1fqQ4s2HrLxe0C15S1Mw2bWW
	FO5BAmbwlnbCEYQb6GUXIAQnXwwajJZ3nOpiH06Z4UattDicNC7grug+NGFxDuf3tZS8=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] tools/xendomains: Restrict domid pattern in LIST_GREP
Message-Id: <E1olv4h-00057k-1Q@xenbits.xenproject.org>
Date: Fri, 21 Oct 2022 16:40:47 +0000

commit 0c06760be3dc3f286015e18c4b1d1694e55da026
Author:     Peter Hoyes <Peter.Hoyes@arm.com>
AuthorDate: Mon Oct 3 15:42:16 2022 +0100
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Thu Oct 20 17:38:56 2022 +0100

    tools/xendomains: Restrict domid pattern in LIST_GREP
    
    The xendomains script uses the output of `xl list -l` to collect the
    id and name of each domain, which is used in the shutdown logic, amongst
    other purposes.
    
    The linked commit added a "domid" field to libxl_domain_create_info.
    This causes the output of `xl list -l` to contain two "domid"s per
    domain, which may not be equal. This in turn causes `xendomains stop` to
    issue two shutdown commands per domain, one of which is to a duplicate
    and/or invalid domid.
    
    To work around this, make the LIST_GREP pattern more restrictive for
    domid, so it only detects the domid at the top level and not the domid
    inside c_info.
    
    Fixes: 4a3a25678d92 ("libxl: allow creation of domains with a specified or random domid")
    Signed-off-by: Peter Hoyes <Peter.Hoyes@arm.com>
    Acked-by: Anthony PERARD <anthony.perard@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/hotplug/Linux/xendomains.in | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/hotplug/Linux/xendomains.in b/tools/hotplug/Linux/xendomains.in
index 334d244882..70f4129ef4 100644
--- a/tools/hotplug/Linux/xendomains.in
+++ b/tools/hotplug/Linux/xendomains.in
@@ -211,7 +211,7 @@ get_xsdomid()
     fi
 }
 
-LIST_GREP='(domain\|(domid\|(name\|^    {$\|"name":\|"domid":'
+LIST_GREP='(domain\|(domid\|(name\|^    {$\|"name":\|^        "domid":'
 parseln()
 {
     if [[ "$1" =~ '(domain' ]] || [[ "$1" = "{" ]]; then
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Sat Oct 22 10:33:06 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Oct 2022 10:33:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.428187.678083 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1omBoM-0005Ta-4o; Sat, 22 Oct 2022 10:33:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 428187.678083; Sat, 22 Oct 2022 10:33:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1omBoM-0005TR-1q; Sat, 22 Oct 2022 10:33:02 +0000
Received: by outflank-mailman (input) for mailman id 428187;
 Sat, 22 Oct 2022 10:33:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1omBoL-0005TJ-HI
 for xen-changelog@lists.xenproject.org; Sat, 22 Oct 2022 10:33:01 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1omBoL-0000iL-GX
 for xen-changelog@lists.xenproject.org; Sat, 22 Oct 2022 10:33:01 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1omBoL-0006hG-Eh
 for xen-changelog@lists.xenproject.org; Sat, 22 Oct 2022 10:33:01 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=XEja3qiXpoLsCGqadXGIqQrI0dNamiqiyt+5tajLNA4=; b=xuwhzAIUFH1ZdC3Q2H4x2JvJfL
	uHqL5u7zMeAy20IqdAeBy9wq81S1NDLE9QtSjcIBzaLSpicpy2ushKRrz+99/15JV5YyqYjgvy9XJ
	d7ujRhMXWWkLNd/gRAueqHxoM5lBBtjXPD/Sv00z5SiUfBvZ4aUHNekVioP/jS+x58rc=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] tools/ocaml/xenstored: fix live update exception
Message-Id: <E1omBoL-0006hG-Eh@xenbits.xenproject.org>
Date: Sat, 22 Oct 2022 10:33:01 +0000

commit f838b956779ff8a0b94636462f3c6d95c3adeb73
Author:     Edwin Török <edvin.torok@citrix.com>
AuthorDate: Fri Oct 21 08:59:25 2022 +0100
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Fri Oct 21 10:28:12 2022 +0100

    tools/ocaml/xenstored: fix live update exception
    
    During live update we will load the /tool/xenstored path from the previous binary,
    and then try to mkdir /tool again, which will fail with EEXIST.
    Check for existence of the path before creating it.
    
    The write call to /tool/xenstored should not need any changes
    (and we do want to overwrite any previous path, in case it changed).
    
    Prior to 7110192b1df6, live update would work only if the binary path was
    specified; with 7110192b1df6 and this patch, live update also works when
    no binary path is specified in `xenstore-control live-update`.
    
    Fixes: 7110192b1df6 ("tools/oxenstored: Fix Oxenstored Live Update")
    Signed-off-by: Edwin Török <edvin.torok@citrix.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/ocaml/xenstored/xenstored.ml | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/tools/ocaml/xenstored/xenstored.ml b/tools/ocaml/xenstored/xenstored.ml
index fc90fcdeb5..acc7290627 100644
--- a/tools/ocaml/xenstored/xenstored.ml
+++ b/tools/ocaml/xenstored/xenstored.ml
@@ -353,7 +353,9 @@ let _ =
 	) in
 
 	(* required for xenstore-control to detect availability of live-update *)
-	Store.mkdir store Perms.Connection.full_rights (Store.Path.of_string "/tool");
+	let tool_path = Store.Path.of_string "/tool" in
+	if not (Store.path_exists store tool_path) then
+		Store.mkdir store Perms.Connection.full_rights tool_path;
 	Store.write store Perms.Connection.full_rights
 		(Store.Path.of_string "/tool/xenstored") Sys.executable_name;
 
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Sat Oct 22 10:33:13 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Oct 2022 10:33:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.428188.678087 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1omBoX-0005WQ-64; Sat, 22 Oct 2022 10:33:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 428188.678087; Sat, 22 Oct 2022 10:33:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1omBoX-0005WH-3L; Sat, 22 Oct 2022 10:33:13 +0000
Received: by outflank-mailman (input) for mailman id 428188;
 Sat, 22 Oct 2022 10:33:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1omBoV-0005W3-LL
 for xen-changelog@lists.xenproject.org; Sat, 22 Oct 2022 10:33:11 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1omBoV-0000ii-KX
 for xen-changelog@lists.xenproject.org; Sat, 22 Oct 2022 10:33:11 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1omBoV-0006hf-Im
 for xen-changelog@lists.xenproject.org; Sat, 22 Oct 2022 10:33:11 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=br+tsSNuVpdIKWl4vDeq7Xhkz6Q+0K6RxgrBkdSz5Fw=; b=tpfmoXhlALvdzvl8jlsp9HnSbM
	E+qgOPXna7P/OODwDCgJSqxhbPyC9USMZuYOg0G6enpJnfvXMyWy+hAquZTJqFCEGjNxMB+r/CZEQ
	SE07frRmc61ZJ8WiPf5/05YXmtFfuuZx9f20ubK4AUW07BtyxoYjCmBUe8+4YPi1Woj4=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] xen/arm: mark handle_linux_pci_domain() __init
Message-Id: <E1omBoV-0006hf-Im@xenbits.xenproject.org>
Date: Sat, 22 Oct 2022 10:33:11 +0000

commit e0347046445a2c6245f6a04093e7e831100611a1
Author:     Stewart Hildebrand <stewart.hildebrand@amd.com>
AuthorDate: Fri Oct 14 16:09:26 2022 -0400
Commit:     Julien Grall <jgrall@amazon.com>
CommitDate: Fri Oct 21 11:09:59 2022 +0100

    xen/arm: mark handle_linux_pci_domain() __init
    
    All functions in domain_build.c should be marked __init. This was
    spotted when building the hypervisor with -Og.
    
    Fixes: 1050a7b91c2e ("xen/arm: add pci-domain for disabled devices")
    Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/arch/arm/domain_build.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index db97536fe8..4fb5c20b13 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1051,8 +1051,8 @@ static void __init assign_static_memory_11(struct domain *d,
  * The current heuristic assumes that a device is a host bridge
  * if the type is "pci" and then parent type is not "pci".
  */
-static int handle_linux_pci_domain(struct kernel_info *kinfo,
-                                   const struct dt_device_node *node)
+static int __init handle_linux_pci_domain(struct kernel_info *kinfo,
+                                          const struct dt_device_node *node)
 {
     uint16_t segment;
     int res;
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Sat Oct 22 10:33:23 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Oct 2022 10:33:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.428189.678091 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1omBoh-0005Zu-7U; Sat, 22 Oct 2022 10:33:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 428189.678091; Sat, 22 Oct 2022 10:33:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1omBoh-0005Zn-4j; Sat, 22 Oct 2022 10:33:23 +0000
Received: by outflank-mailman (input) for mailman id 428189;
 Sat, 22 Oct 2022 10:33:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1omBof-0005ZV-OA
 for xen-changelog@lists.xenproject.org; Sat, 22 Oct 2022 10:33:21 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1omBof-0000it-NT
 for xen-changelog@lists.xenproject.org; Sat, 22 Oct 2022 10:33:21 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1omBof-0006i6-Mc
 for xen-changelog@lists.xenproject.org; Sat, 22 Oct 2022 10:33:21 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=M5vUoXBSgO5FvkYDPfKxq8W6y0WB0wqRxGBG2IwSQAI=; b=s5uD3UAQR4m6pdCGM4aIYgHz3T
	Jmq18EyQabMe2MS2bsNQsH1I/mB6TsVVH4DDQkdjIAtQuoV1dtsxMkg95cr5N1cBNIpKd2JfZ4zPI
	lEzNzGFo3HYyDTK5Eo/gGnPq49wd/hfkNW0SyQd+1oHKEh7HmHT2OPORgSM5nTy/L7D0=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] xen/arm: p2m: fix pa_range_info for 52-bit pa range
Message-Id: <E1omBof-0006i6-Mc@xenbits.xenproject.org>
Date: Sat, 22 Oct 2022 10:33:21 +0000

commit 974c8d810a1daacb3322015cd1c124d26155fc75
Author:     Xenia Ragiadakou <burzalodowa@gmail.com>
AuthorDate: Wed Oct 19 17:49:13 2022 +0300
Commit:     Julien Grall <jgrall@amazon.com>
CommitDate: Fri Oct 21 11:15:25 2022 +0100

    xen/arm: p2m: fix pa_range_info for 52-bit pa range
    
    Currently, the fields 'root_order' and 'sl0' of the pa_range_info for
    the 52-bit pa range have the values 3 and 3, respectively.
    This configuration does not match any of the valid root table configurations
    for 4KB granule and t0sz 12, described in ARM DDI 0487I.a D8.2.7.
    
    More specifically, according to ARM DDI 0487I.a D8.2.7, in order to support
    the 52-bit pa size with 4KB granule, the p2m root table needs to be configured
    either as a single table at level -1 or as 16 concatenated tables at level 0.
    Since there is currently no support for level -1, set the 'root_order' and
    'sl0' fields of the 52-bit pa_range_info according to the second approach.
    
    Note that the values of those fields are not used so far; this patch
    updates them only for the sake of correctness.
    
    Fixes: 407b13a71e32 ("xen/arm: p2m don't fall over on FEAT_LPA enabled hw")
    Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
    Reviewed-by: Michal Orzel <michal.orzel@amd.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/arch/arm/p2m.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 00d05bb708..94d3b60b13 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -2281,7 +2281,7 @@ void __init setup_virt_paging(void)
         [3] = { 42,      22/*22*/,  3,          1 },
         [4] = { 44,      20/*20*/,  0,          2 },
         [5] = { 48,      16/*16*/,  0,          2 },
-        [6] = { 52,      12/*12*/,  3,          3 },
+        [6] = { 52,      12/*12*/,  4,          2 },
         [7] = { 0 }  /* Invalid */
     };
 
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Sat Oct 22 10:33:33 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Oct 2022 10:33:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.428190.678095 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1omBor-0005cc-95; Sat, 22 Oct 2022 10:33:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 428190.678095; Sat, 22 Oct 2022 10:33:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1omBor-0005cU-6M; Sat, 22 Oct 2022 10:33:33 +0000
Received: by outflank-mailman (input) for mailman id 428190;
 Sat, 22 Oct 2022 10:33:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1omBop-0005cG-TX
 for xen-changelog@lists.xenproject.org; Sat, 22 Oct 2022 10:33:31 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1omBop-0000jH-Qu
 for xen-changelog@lists.xenproject.org; Sat, 22 Oct 2022 10:33:31 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1omBop-0006iZ-Pe
 for xen-changelog@lists.xenproject.org; Sat, 22 Oct 2022 10:33:31 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=jROq+hKjZvolcooIMUQX7xrePRwBYqswq8qiN88Hunw=; b=Tx1JgabNgBLm6KyxyTXkTBhO3u
	3va7PeXvaePD/rfpbNIMZ50qOrnVfnkOsNaS6kj88O2FB/bPbsAFQI7nkuNBlwDaVypW38EZ2Jdjq
	Ya4ykZIRkBnWeylbIZFYiPMaimhYQEz/dwTydZOYus/UfJPKFSGdm5qbFpK8ZTukRA0g=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] EFI: don't convert memory marked for runtime use to ordinary RAM
Message-Id: <E1omBop-0006iZ-Pe@xenbits.xenproject.org>
Date: Sat, 22 Oct 2022 10:33:31 +0000

commit f324300c8347b6aa6f9c0b18e0a90bbf44011a9a
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Fri Oct 21 12:30:24 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Fri Oct 21 12:30:24 2022 +0200

    EFI: don't convert memory marked for runtime use to ordinary RAM
    
    In both relevant places, efi_init_memory() treats EFI_MEMORY_RUNTIME as
    having higher priority than the type of the range. To avoid accessing
    memory at
    runtime which was re-used for other purposes, make
    efi_arch_process_memory_map() follow suit. While in theory the same would
    apply to EfiACPIReclaimMemory, we don't actually "reclaim" or clobber
    that memory (converted to E820_ACPI on x86) there (and it would be a bug
    if the Dom0 kernel tried to reclaim the range, bypassing Xen's memory
    management, plus it would be at least bogus if it clobbered that space),
    hence that type's handling can be left alone.
    
    Fixes: bf6501a62e80 ("x86-64: EFI boot code")
    Fixes: facac0af87ef ("x86-64: EFI runtime code")
    Fixes: 6d70ea10d49f ("Add ARM EFI boot support")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/arch/arm/efi/efi-boot.h | 3 ++-
 xen/arch/x86/efi/efi-boot.h | 4 +++-
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/efi/efi-boot.h b/xen/arch/arm/efi/efi-boot.h
index 59d93c24a1..43a836c3a7 100644
--- a/xen/arch/arm/efi/efi-boot.h
+++ b/xen/arch/arm/efi/efi-boot.h
@@ -183,7 +183,8 @@ static EFI_STATUS __init efi_process_memory_map_bootinfo(EFI_MEMORY_DESCRIPTOR *
 
     for ( Index = 0; Index < (mmap_size / desc_size); Index++ )
     {
-        if ( desc_ptr->Attribute & EFI_MEMORY_WB &&
+        if ( !(desc_ptr->Attribute & EFI_MEMORY_RUNTIME) &&
+             (desc_ptr->Attribute & EFI_MEMORY_WB) &&
              (desc_ptr->Type == EfiConventionalMemory ||
               desc_ptr->Type == EfiLoaderCode ||
               desc_ptr->Type == EfiLoaderData ||
diff --git a/xen/arch/x86/efi/efi-boot.h b/xen/arch/x86/efi/efi-boot.h
index 836e8c2ba1..e82ac9daa7 100644
--- a/xen/arch/x86/efi/efi-boot.h
+++ b/xen/arch/x86/efi/efi-boot.h
@@ -185,7 +185,9 @@ static void __init efi_arch_process_memory_map(EFI_SYSTEM_TABLE *SystemTable,
             /* fall through */
         case EfiLoaderCode:
         case EfiLoaderData:
-            if ( desc->Attribute & EFI_MEMORY_WB )
+            if ( desc->Attribute & EFI_MEMORY_RUNTIME )
+                type = E820_RESERVED;
+            else if ( desc->Attribute & EFI_MEMORY_WB )
                 type = E820_RAM;
             else
         case EfiUnusableMemory:
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Sat Oct 22 10:33:43 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Oct 2022 10:33:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.428192.678100 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1omBp1-0005fr-BP; Sat, 22 Oct 2022 10:33:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 428192.678100; Sat, 22 Oct 2022 10:33:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1omBp1-0005fj-7s; Sat, 22 Oct 2022 10:33:43 +0000
Received: by outflank-mailman (input) for mailman id 428192;
 Sat, 22 Oct 2022 10:33:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1omBp0-0005fV-0K
 for xen-changelog@lists.xenproject.org; Sat, 22 Oct 2022 10:33:42 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1omBoz-0000ja-Uz
 for xen-changelog@lists.xenproject.org; Sat, 22 Oct 2022 10:33:41 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1omBoz-0006iy-T8
 for xen-changelog@lists.xenproject.org; Sat, 22 Oct 2022 10:33:41 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=yTK6qX4tVL8Vw/GPCO511nSgBfO5gWa8c9O9nrTGdEU=; b=NeoP3fv9DmOEfVhzR78X3duHT5
	0m79/QfPW3B57D1AdZhL73/KDvJgGB7BNL37Bbbno+2XaQBSZ3eg5aZ23o3W7KRcL1kZ88hFAsGsy
	rp+w9CLnqWs45TA/+4jJ0mY1ua5nD9PoGqDUf0I3JXEW9WRlZZs2JaPIslGHLdyj1sy8=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] xen/sched: fix race in RTDS scheduler
Message-Id: <E1omBoz-0006iy-T8@xenbits.xenproject.org>
Date: Sat, 22 Oct 2022 10:33:41 +0000

commit 73c62927f64ecb48f27d06176befdf76b879f340
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Fri Oct 21 12:32:23 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Fri Oct 21 12:32:23 2022 +0200

    xen/sched: fix race in RTDS scheduler
    
    When a domain gets paused, the runnable state of its units can change
    to "not runnable" without the scheduling lock being taken. This means
    that the specific scheduler isn't involved in this change of runnable
    state.
    
    In the RTDS scheduler this can result in an inconsistency when a unit
    loses its "runnable" state while the RTDS scheduler's scheduling
    function is active: RTDS will remove the unit from the run queue, but
    not from the replenish queue, leading to an ASSERT() being hit in
    replq_insert() later, when the domain is unpaused again.
    
    Fix that by removing the unit from the replenish queue as well in this
    case.
    
    Fixes: 7c7b407e7772 ("xen/sched: introduce unit_runnable_state()")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Dario Faggioli <dfaggioli@suse.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/common/sched/rt.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/common/sched/rt.c b/xen/common/sched/rt.c
index d6de25531b..960a8033e2 100644
--- a/xen/common/sched/rt.c
+++ b/xen/common/sched/rt.c
@@ -1087,6 +1087,7 @@ rt_schedule(const struct scheduler *ops, struct sched_unit *currunit,
         else if ( !unit_runnable_state(snext->unit) )
         {
             q_remove(snext);
+            replq_remove(ops, snext);
             snext = rt_unit(sched_idle_unit(sched_cpu));
         }
 
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Mon Oct 24 10:22:10 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Oct 2022 10:22:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.428818.679348 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1omuar-0007up-Kf; Mon, 24 Oct 2022 10:22:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 428818.679348; Mon, 24 Oct 2022 10:22:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1omuar-0007uh-I3; Mon, 24 Oct 2022 10:22:05 +0000
Received: by outflank-mailman (input) for mailman id 428818;
 Mon, 24 Oct 2022 10:22:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1omuap-0007ub-W8
 for xen-changelog@lists.xenproject.org; Mon, 24 Oct 2022 10:22:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1omuap-0000JT-UH
 for xen-changelog@lists.xenproject.org; Mon, 24 Oct 2022 10:22:03 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1omuap-0000ii-Sp
 for xen-changelog@lists.xenproject.org; Mon, 24 Oct 2022 10:22:03 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=kWyI4j6PkKqpoEscYlhmJ0gaHOSzwQqI0owyh/4wwDI=; b=HWdE5fp6VD4Hy0VFFFBHUHz7Z3
	e4cllmXx2IRpjPA/Icrzsurjx3SwbQ3IBUMyZfrHZubXc9GoMo+5Em1IBjbfJq9fhQHwaVyH5amrw
	XAUaO2tUdsxWyU0iCTYT+fIpE4fV4C8ES9i8PnaaqAOzXI9PUzPgDje86Os7boSG94s8=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] xen/sched: fix restore_vcpu_affinity() by removing it
Message-Id: <E1omuap-0000ii-Sp@xenbits.xenproject.org>
Date: Mon, 24 Oct 2022 10:22:03 +0000

commit fce1f381f7388daaa3e96dbb0d67d7a3e4bb2d2d
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Fri Oct 21 12:50:26 2022 +0200
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Mon Oct 24 11:16:27 2022 +0100

    xen/sched: fix restore_vcpu_affinity() by removing it
    
    When the system is coming up after having been suspended,
    restore_vcpu_affinity() is called for each domain in order to adjust
    the vcpus' affinity settings in case a cpu didn't come back up again.
    
    The way restore_vcpu_affinity() does that is wrong, because the
    specific scheduler isn't informed about a possible migration of the
    vcpu to another cpu. Additionally, the migration often happens even
    when all cpus are running again, as it is done without checking
    whether it is really needed.
    
    As cpupool management already calls cpu_disable_scheduler() for cpus
    which didn't come up again, and cpu_disable_scheduler() takes care of
    any needed vcpu migration in the proper way, there is simply no need
    for restore_vcpu_affinity().
    
    So just remove restore_vcpu_affinity() completely, together with the
    no longer used sched_reset_affinity_broken().
    
    Fixes: 8a04eaa8ea83 ("xen/sched: move some per-vcpu items to struct sched_unit")
    Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Dario Faggioli <dfaggioli@suse.com>
    Tested-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/arch/x86/acpi/power.c |  3 --
 xen/common/sched/core.c   | 78 -----------------------------------------------
 xen/include/xen/sched.h   |  1 -
 3 files changed, 82 deletions(-)

diff --git a/xen/arch/x86/acpi/power.c b/xen/arch/x86/acpi/power.c
index 1bb4d78392..b76f673acb 100644
--- a/xen/arch/x86/acpi/power.c
+++ b/xen/arch/x86/acpi/power.c
@@ -159,10 +159,7 @@ static void thaw_domains(void)
 
     rcu_read_lock(&domlist_read_lock);
     for_each_domain ( d )
-    {
-        restore_vcpu_affinity(d);
         domain_unpause(d);
-    }
     rcu_read_unlock(&domlist_read_lock);
 }
 
diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 83455fbde1..23fa6845a8 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -1188,84 +1188,6 @@ static bool sched_check_affinity_broken(const struct sched_unit *unit)
     return false;
 }
 
-static void sched_reset_affinity_broken(const struct sched_unit *unit)
-{
-    struct vcpu *v;
-
-    for_each_sched_unit_vcpu ( unit, v )
-        v->affinity_broken = false;
-}
-
-void restore_vcpu_affinity(struct domain *d)
-{
-    unsigned int cpu = smp_processor_id();
-    struct sched_unit *unit;
-
-    ASSERT(system_state == SYS_STATE_resume);
-
-    rcu_read_lock(&sched_res_rculock);
-
-    for_each_sched_unit ( d, unit )
-    {
-        spinlock_t *lock;
-        unsigned int old_cpu = sched_unit_master(unit);
-        struct sched_resource *res;
-
-        ASSERT(!unit_runnable(unit));
-
-        /*
-         * Re-assign the initial processor as after resume we have no
-         * guarantee the old processor has come back to life again.
-         *
-         * Therefore, here, before actually unpausing the domains, we should
-         * set v->processor of each of their vCPUs to something that will
-         * make sense for the scheduler of the cpupool in which they are in.
-         */
-        lock = unit_schedule_lock_irq(unit);
-
-        cpumask_and(cpumask_scratch_cpu(cpu), unit->cpu_hard_affinity,
-                    cpupool_domain_master_cpumask(d));
-        if ( cpumask_empty(cpumask_scratch_cpu(cpu)) )
-        {
-            if ( sched_check_affinity_broken(unit) )
-            {
-                sched_set_affinity(unit, unit->cpu_hard_affinity_saved, NULL);
-                sched_reset_affinity_broken(unit);
-                cpumask_and(cpumask_scratch_cpu(cpu), unit->cpu_hard_affinity,
-                            cpupool_domain_master_cpumask(d));
-            }
-
-            if ( cpumask_empty(cpumask_scratch_cpu(cpu)) )
-            {
-                /* Affinity settings of one vcpu are for the complete unit. */
-                printk(XENLOG_DEBUG "Breaking affinity for %pv\n",
-                       unit->vcpu_list);
-                sched_set_affinity(unit, &cpumask_all, NULL);
-                cpumask_and(cpumask_scratch_cpu(cpu), unit->cpu_hard_affinity,
-                            cpupool_domain_master_cpumask(d));
-            }
-        }
-
-        res = get_sched_res(cpumask_any(cpumask_scratch_cpu(cpu)));
-        sched_set_res(unit, res);
-
-        spin_unlock_irq(lock);
-
-        /* v->processor might have changed, so reacquire the lock. */
-        lock = unit_schedule_lock_irq(unit);
-        res = sched_pick_resource(unit_scheduler(unit), unit);
-        sched_set_res(unit, res);
-        spin_unlock_irq(lock);
-
-        if ( old_cpu != sched_unit_master(unit) )
-            sched_move_irqs(unit);
-    }
-
-    rcu_read_unlock(&sched_res_rculock);
-
-    domain_update_node_affinity(d);
-}
-
 /*
  * This function is used by cpu_hotplug code via cpu notifier chain
  * and from cpupools to switch schedulers on a cpu.
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 557b3229f6..072e4846aa 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -1019,7 +1019,6 @@ void vcpu_set_periodic_timer(struct vcpu *v, s_time_t value);
 void sched_setup_dom0_vcpus(struct domain *d);
 int vcpu_temporary_affinity(struct vcpu *v, unsigned int cpu, uint8_t reason);
 int vcpu_set_hard_affinity(struct vcpu *v, const cpumask_t *affinity);
-void restore_vcpu_affinity(struct domain *d);
 int vcpu_affinity_domctl(struct domain *d, uint32_t cmd,
                          struct xen_domctl_vcpuaffinity *vcpuaff);
 
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Mon Oct 24 13:55:12 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Oct 2022 13:55:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.429163.680030 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1omxv0-00057K-1e; Mon, 24 Oct 2022 13:55:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 429163.680030; Mon, 24 Oct 2022 13:55:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1omxuz-00057D-VO; Mon, 24 Oct 2022 13:55:05 +0000
Received: by outflank-mailman (input) for mailman id 429163;
 Mon, 24 Oct 2022 13:55:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1omxuy-00053F-Bq
 for xen-changelog@lists.xenproject.org; Mon, 24 Oct 2022 13:55:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1omxuy-00043j-8N
 for xen-changelog@lists.xenproject.org; Mon, 24 Oct 2022 13:55:04 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1omxuy-0004du-7N
 for xen-changelog@lists.xenproject.org; Mon, 24 Oct 2022 13:55:04 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=76JPNDQSGPnTSvGq6UBNX9EafFkzorMG3+x4TH1fhis=; b=kpzgFJRE+qmshllCGOg3WVpYfJ
	nqPwNddTK5WcoAhmCezXTJSSRaRU+/0AX9pYiUfgRB0WcGyjv2s1UChsigvAXmnM+5+v4a3tIQ6F9
	D8MeQQM2sj9MRqT4QNpytgoiM9z2bd9rmez04gikckr2E/FtqzUTWKZrVqDxGk1GYmOk=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] x86/shadow: drop (replace) bogus assertions
Message-Id: <E1omxuy-0004du-7N@xenbits.xenproject.org>
Date: Mon, 24 Oct 2022 13:55:04 +0000

commit a92dc2bb30ba65ae25d2f417677eb7ef9a6a0fef
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Mon Oct 24 15:46:11 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 24 15:46:11 2022 +0200

    x86/shadow: drop (replace) bogus assertions
    
    The addition of a call to shadow_blow_tables() from shadow_teardown()
    has resulted in the "no vcpus" related assertion becoming triggerable:
    If domain_create() fails with at least one page successfully allocated
    in the course of shadow_enable(), or if domain_create() succeeds and
    the domain is then killed without ever invoking XEN_DOMCTL_max_vcpus.
    Note that in-tree tests (test-resource and test-tsx) do exactly the
    latter of these two.
    
    The assertion's comment was bogus anyway: Shadow mode has been getting
    enabled before allocation of vCPU-s for quite some time. Convert the
    assertion to a conditional: As long as there are no vCPU-s, there's
    nothing to blow away.
    
    Fixes: e7aa55c0aab3 ("x86/p2m: free the paging memory pool preemptively")
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    
    A similar assertion/comment pair exists in _shadow_prealloc(); the
    comment is similarly bogus, and the assertion could in principle trigger
    e.g. when shadow_alloc_p2m_page() is called early enough. Replace those
    at the same time by a similar early return, here indicating failure to
    the caller (which will generally lead to the domain being crashed in
    shadow_prealloc()).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/arch/x86/mm/shadow/common.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index d985d51614..badfd53c6b 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -943,8 +943,9 @@ static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
         /* No reclaim when the domain is dying, teardown will take care of it. */
         return false;
 
-    /* Shouldn't have enabled shadows if we've no vcpus. */
-    ASSERT(d->vcpu && d->vcpu[0]);
+    /* Nothing to reclaim when there are no vcpus yet. */
+    if ( !d->vcpu[0] )
+        return false;
 
     /* Stage one: walk the list of pinned pages, unpinning them */
     perfc_incr(shadow_prealloc_1);
@@ -1034,8 +1035,9 @@ void shadow_blow_tables(struct domain *d)
     mfn_t smfn;
     int i;
 
-    /* Shouldn't have enabled shadows if we've no vcpus. */
-    ASSERT(d->vcpu && d->vcpu[0]);
+    /* Nothing to do when there are no vcpus yet. */
+    if ( !d->vcpu[0] )
+        return;
 
     /* Pass one: unpin all pinned pages */
     foreach_pinned_shadow(d, sp, t)
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Tue Oct 25 13:44:13 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Oct 2022 13:44:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.429859.681135 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onKDt-0004M3-GD; Tue, 25 Oct 2022 13:44:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 429859.681135; Tue, 25 Oct 2022 13:44:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onKDt-0004Lv-DO; Tue, 25 Oct 2022 13:44:05 +0000
Received: by outflank-mailman (input) for mailman id 429859;
 Tue, 25 Oct 2022 13:44:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onKDs-0004Lp-GT
 for xen-changelog@lists.xenproject.org; Tue, 25 Oct 2022 13:44:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onKDs-0003zP-EC
 for xen-changelog@lists.xenproject.org; Tue, 25 Oct 2022 13:44:04 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onKDs-0000r7-DA
 for xen-changelog@lists.xenproject.org; Tue, 25 Oct 2022 13:44:04 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=y1wmzTZxrtZUN/zGvjIEDG2PFXXWLt5heCqhmSebb10=; b=h1A13dfR2SHkBpRvBLjBYybs5E
	SF6FdmS0/iXarXsBn+bwLhF9f3E6OHjHsb8D/1UvYODdNrL6RIZdzG7K2nQz7h7zP3E6RCduEqskr
	qX+2gxRWhfDOmZL55a4AYhEZp6yrePqQAkEX2iSWvqmJwAbnE/TSJ/pnQsvInNdwgT88=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] libs/light: Fix build, fix missing _libxl_types_json.h
Message-Id: <E1onKDs-0000r7-DA@xenbits.xenproject.org>
Date: Tue, 25 Oct 2022 13:44:04 +0000

commit 4ff0811a2b0d1c715f54550f9a3632195bb6b21f
Author:     Anthony PERARD <anthony.perard@citrix.com>
AuthorDate: Tue Oct 25 12:16:32 2022 +0100
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Tue Oct 25 13:36:40 2022 +0100

    libs/light: Fix build, fix missing _libxl_types_json.h
    
    Make may not have copied "_libxl_types_json.h" into $(XEN_INCLUDE)
    before starting to build the different objects.
    
    Make sure that the generated headers are copied into $(XEN_INCLUDE)
    before they are used. This is achieved by telling make which headers
    are needed to use "libxl_internal.h": it uses "libxl_json.h", which
    in turn uses "_libxl_types_json.h". "libxl_internal.h" also uses
    "libxl.h", so add that to the list as well.
    
    This also prevents `gcc` from using potentially installed headers
    from a previous version of Xen.
    
    Reported-by: Per Bilse <per.bilse@citrix.com>
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/libs/light/Makefile | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tools/libs/light/Makefile b/tools/libs/light/Makefile
index d681269229..374be1cfab 100644
--- a/tools/libs/light/Makefile
+++ b/tools/libs/light/Makefile
@@ -209,6 +209,7 @@ _libxl_save_msgs_helper.h _libxl_save_msgs_callout.h: \
 
 $(XEN_INCLUDE)/libxl.h: $(XEN_INCLUDE)/_libxl_types.h
 $(XEN_INCLUDE)/libxl_json.h: $(XEN_INCLUDE)/_libxl_types_json.h
+libxl_internal.h: $(XEN_INCLUDE)/libxl.h $(XEN_INCLUDE)/libxl_json.h
 libxl_internal.h: _libxl_types_internal.h _libxl_types_private.h _libxl_types_internal_private.h
 libxl_internal_json.h: _libxl_types_internal_json.h
 
--
generated by git-patchbot for /home/xen/git/xen.git#staging
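The build fix above boils down to declaring the copied header as a prerequisite, so make materialises it before anything that (transitively) needs it. A toy reproduction of the idea, with entirely made-up file names (not the libxl build system), might look like this; it uses GNU make's .RECIPEPREFIX to avoid literal tabs:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
mkdir include

# Stand-in for a generated header that must be copied before use.
echo '#define ANSWER 42' > gen.h.in

cat > Makefile <<'EOF'
.RECIPEPREFIX = >

all: main.stamp

# The fix: main.stamp (transitively) needs the copied header, so say so.
# Without this prerequisite, make would try to use include/gen.h before
# knowing it has to copy it there first.
main.stamp: include/gen.h
> cat include/gen.h > main.stamp

include/gen.h: gen.h.in
> cp gen.h.in include/gen.h
EOF

make --silent
grep -q ANSWER main.stamp && echo "header copied before use"
```

Deleting the `main.stamp: include/gen.h` line makes the build fail, which is the analogue of the missing-header build break the patch fixes.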


From xen-changelog-bounces@lists.xenproject.org Tue Oct 25 20:00:08 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Oct 2022 20:00:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.430199.681592 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onQ5m-0002Lq-LJ; Tue, 25 Oct 2022 20:00:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 430199.681592; Tue, 25 Oct 2022 20:00:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onQ5m-0002LQ-IA; Tue, 25 Oct 2022 20:00:06 +0000
Received: by outflank-mailman (input) for mailman id 430199;
 Tue, 25 Oct 2022 20:00:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onQ5k-00022H-JB
 for xen-changelog@lists.xenproject.org; Tue, 25 Oct 2022 20:00:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onQ5k-0002dV-IQ
 for xen-changelog@lists.xenproject.org; Tue, 25 Oct 2022 20:00:04 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onQ5k-00062L-Ga
 for xen-changelog@lists.xenproject.org; Tue, 25 Oct 2022 20:00:04 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=nojguMGC+q7yyizXnQceVmMO0/L734W5D7VZpwOaN5s=; b=HSoslaVsShvjb4XvyJtErZ4tiO
	f5zFbpubKTX5GJYKtcQl30I/WOOGfTGnwcyadRlXDsvBi4vodTD7jXaH1llk0hzqWPNHwPzRe5K3B
	cW3/AmMbG4YXQDU2MFvWJRxzhrf3Ym58zR331qKX9iSSWQUrtWAJxZc81erIxXjVHjCc=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.16] arm/p2m: Rework p2m_init()
Message-Id: <E1onQ5k-00062L-Ga@xenbits.xenproject.org>
Date: Tue, 25 Oct 2022 20:00:04 +0000

commit 86cb37447548420e41ff953a7372972f6154d6d1
Author:     Andrew Cooper <andrew.cooper3@citrix.com>
AuthorDate: Tue Oct 25 09:21:11 2022 +0000
Commit:     Julien Grall <jgrall@amazon.com>
CommitDate: Tue Oct 25 20:52:43 2022 +0100

    arm/p2m: Rework p2m_init()
    
    p2m_init() is mostly trivial initialisation, but has two fallible
    operations which sit on either side of the backpointer assignment,
    the trigger for teardown to take action.
    
    p2m_free_vmid() is idempotent with a failed p2m_alloc_vmid(), so rearrange
    p2m_init() to perform all trivial setup, then set the backpointer, then
    perform all fallible setup.
    
    This will simplify a future bugfix which needs to add a third fallible
    operation.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
    (cherry picked from commit: 3783e583319fa1ce75e414d851f0fde191a14753)
---
 xen/arch/arm/p2m.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index b2d856a801..4f7d923ad9 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1730,7 +1730,7 @@ void p2m_final_teardown(struct domain *d)
 int p2m_init(struct domain *d)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    int rc = 0;
+    int rc;
     unsigned int cpu;
 
     rwlock_init(&p2m->lock);
@@ -1739,11 +1739,6 @@ int p2m_init(struct domain *d)
     INIT_PAGE_LIST_HEAD(&d->arch.paging.p2m_freelist);
 
     p2m->vmid = INVALID_VMID;
-
-    rc = p2m_alloc_vmid(d);
-    if ( rc != 0 )
-        return rc;
-
     p2m->max_mapped_gfn = _gfn(0);
     p2m->lowest_mapped_gfn = _gfn(ULONG_MAX);
 
@@ -1759,8 +1754,6 @@ int p2m_init(struct domain *d)
     p2m->clean_pte = is_iommu_enabled(d) &&
         !iommu_has_feature(d, IOMMU_FEAT_COHERENT_WALK);
 
-    rc = p2m_alloc_table(d);
-
     /*
      * Make sure that the type chosen to is able to store the an vCPU ID
      * between 0 and the maximum of virtual CPUS supported as long as
@@ -1773,13 +1766,20 @@ int p2m_init(struct domain *d)
        p2m->last_vcpu_ran[cpu] = INVALID_VCPU_ID;
 
     /*
-     * Besides getting a domain when we only have the p2m in hand,
-     * the back pointer to domain is also used in p2m_teardown()
-     * as an end-of-initialization indicator.
+     * "Trivial" initialisation is now complete.  Set the backpointer so
+     * p2m_teardown() and friends know to do something.
      */
     p2m->domain = d;
 
-    return rc;
+    rc = p2m_alloc_vmid(d);
+    if ( rc )
+        return rc;
+
+    rc = p2m_alloc_table(d);
+    if ( rc )
+        return rc;
+
+    return 0;
 }
 
 /*
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.16


From xen-changelog-bounces@lists.xenproject.org Tue Oct 25 20:00:16 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Oct 2022 20:00:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.430200.681597 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onQ5w-0002SU-MN; Tue, 25 Oct 2022 20:00:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 430200.681597; Tue, 25 Oct 2022 20:00:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onQ5w-0002SM-Jk; Tue, 25 Oct 2022 20:00:16 +0000
Received: by outflank-mailman (input) for mailman id 430200;
 Tue, 25 Oct 2022 20:00:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onQ5u-0002S5-Mh
 for xen-changelog@lists.xenproject.org; Tue, 25 Oct 2022 20:00:14 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onQ5u-0002dh-Lw
 for xen-changelog@lists.xenproject.org; Tue, 25 Oct 2022 20:00:14 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onQ5u-00064M-Kl
 for xen-changelog@lists.xenproject.org; Tue, 25 Oct 2022 20:00:14 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=+ZV/gq9ICyFFjA9AOo+7E9QBGvrX1wjdp14OisN7+aY=; b=nqZpLjhphrewXcpQOEHTGjauPx
	ruMVyw2SNwwDhO3xPWFEV2OTC5SvmgaKleU8+kAVMD/bSaqhCMXrLWh3hfg2jWiKbE1EvXKgmNuZX
	mmeAyJIHvvYqrTyvSWLmqzPqf6dmPhiQwoQbC17w3jzVnwqm6eReZrTofCPHge10B26g=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.16] xen/arm: p2m: Populate pages for GICv2 mapping in p2m_init()
Message-Id: <E1onQ5u-00064M-Kl@xenbits.xenproject.org>
Date: Tue, 25 Oct 2022 20:00:14 +0000

commit e5a5bdeba6a0c3eacd2ba39c1ee36b3c54e77dca
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Tue Oct 25 09:21:12 2022 +0000
Commit:     Julien Grall <jgrall@amazon.com>
CommitDate: Tue Oct 25 20:54:26 2022 +0100

    xen/arm: p2m: Populate pages for GICv2 mapping in p2m_init()
    
    Hardware using GICv2 needs a P2M mapping of the 8KB GICv2 area to be
    created when the domain is created. In the worst case this requires 6
    P2M pages, because the two pages are consecutive but not necessarily
    in the same L3 page table. To satisfy this requirement, and to keep
    some buffer, populate 16 pages into the P2M pages pool in p2m_init()
    at the domain creation stage. For GICv3 the above-mentioned P2M
    mapping is not necessary, but since the 16 pages allocated here are
    not lost, populate them unconditionally.
    
    With the default 16 P2M pages populated, a failure during domain
    creation can now happen with P2M pages already in use. To properly
    free the P2M in this case, first make preemption of p2m_teardown()
    optional, then call p2m_teardown() and p2m_set_allocation(d, 0, NULL)
    non-preemptively in p2m_final_teardown(). As a non-preemptive
    p2m_teardown() should only ever return 0, use a BUG_ON() to confirm
    that.
    
    p2m_final_teardown() is called either after
    domain_relinquish_resources(), where relinquish_p2m_mapping() has
    already been called, or from the failure path of
    domain_create()/arch_domain_create(), where mappings that require
    p2m_put_l3_page() should never be created. Hence
    relinquish_p2m_mapping() is not added to p2m_final_teardown();
    in-code comments are added to explain this.
    
    Fixes: cbea5a1149ca ("xen/arm: Allocate and free P2M pages from the P2M pool")
    Suggested-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
    (cherry picked from commit: c7cff1188802646eaa38e918e5738da0e84949be)
---
 xen/arch/arm/domain.c     |  2 +-
 xen/arch/arm/p2m.c        | 34 ++++++++++++++++++++++++++++++++--
 xen/include/asm-arm/p2m.h | 14 ++++++++++----
 3 files changed, 43 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index a818f33a1a..c7feaa323a 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -1059,7 +1059,7 @@ int domain_relinquish_resources(struct domain *d)
             return ret;
 
     PROGRESS(p2m):
-        ret = p2m_teardown(d);
+        ret = p2m_teardown(d, true);
         if ( ret )
             return ret;
 
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 4f7d923ad9..6f87e17c1d 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1661,7 +1661,7 @@ static void p2m_free_vmid(struct domain *d)
     spin_unlock(&vmid_alloc_lock);
 }
 
-int p2m_teardown(struct domain *d)
+int p2m_teardown(struct domain *d, bool allow_preemption)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     unsigned long count = 0;
@@ -1669,6 +1669,9 @@ int p2m_teardown(struct domain *d)
     unsigned int i;
     int rc = 0;
 
+    if ( page_list_empty(&p2m->pages) )
+        return 0;
+
     p2m_write_lock(p2m);
 
     /*
@@ -1692,7 +1695,7 @@ int p2m_teardown(struct domain *d)
         p2m_free_page(p2m->domain, pg);
         count++;
         /* Arbitrarily preempt every 512 iterations */
-        if ( !(count % 512) && hypercall_preempt_check() )
+        if ( allow_preemption && !(count % 512) && hypercall_preempt_check() )
         {
             rc = -ERESTART;
             break;
@@ -1712,7 +1715,20 @@ void p2m_final_teardown(struct domain *d)
     if ( !p2m->domain )
         return;
 
+    /*
+     * No need to call relinquish_p2m_mapping() here because
+     * p2m_final_teardown() is called either after domain_relinquish_resources()
+     * where relinquish_p2m_mapping() has been called, or from failure path of
+     * domain_create()/arch_domain_create() where mappings that require
+     * p2m_put_l3_page() should never be created. For the latter case, also see
+     * comment on top of the p2m_set_entry() for more info.
+     */
+
+    BUG_ON(p2m_teardown(d, false));
     ASSERT(page_list_empty(&p2m->pages));
+
+    while ( p2m_teardown_allocation(d) == -ERESTART )
+        continue; /* No preemption support here */
     ASSERT(page_list_empty(&d->arch.paging.p2m_freelist));
 
     if ( p2m->root )
@@ -1779,6 +1795,20 @@ int p2m_init(struct domain *d)
     if ( rc )
         return rc;
 
+    /*
+     * Hardware using GICv2 needs to create a P2M mapping of 8KB GICv2 area
+     * when the domain is created. Considering the worst case for page
+     * tables and keep a buffer, populate 16 pages to the P2M pages pool here.
+     * For GICv3, the above-mentioned P2M mapping is not necessary, but since
+     * the allocated 16 pages here would not be lost, hence populate these
+     * pages unconditionally.
+     */
+    spin_lock(&d->arch.paging.lock);
+    rc = p2m_set_allocation(d, 16, NULL);
+    spin_unlock(&d->arch.paging.lock);
+    if ( rc )
+        return rc;
+
     return 0;
 }
 
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index c9598740bd..b2725206e8 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -194,14 +194,18 @@ int p2m_init(struct domain *d);
 
 /*
  * The P2M resources are freed in two parts:
- *  - p2m_teardown() will be called when relinquish the resources. It
- *    will free large resources (e.g. intermediate page-tables) that
- *    requires preemption.
+ *  - p2m_teardown() will be called preemptively when relinquish the
+ *    resources, in which case it will free large resources (e.g. intermediate
+ *    page-tables) that requires preemption.
  *  - p2m_final_teardown() will be called when domain struct is been
  *    freed. This *cannot* be preempted and therefore one small
  *    resources should be freed here.
+ *  Note that p2m_final_teardown() will also call p2m_teardown(), to properly
+ *  free the P2M when failures happen in the domain creation with P2M pages
+ *  already in use. In this case p2m_teardown() is called non-preemptively and
+ *  p2m_teardown() will always return 0.
  */
-int p2m_teardown(struct domain *d);
+int p2m_teardown(struct domain *d, bool allow_preemption);
 void p2m_final_teardown(struct domain *d);
 
 /*
@@ -266,6 +270,8 @@ mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
 /*
  * Direct set a p2m entry: only for use by the P2M code.
  * The P2M write lock should be taken.
+ * TODO: Add a check in __p2m_set_entry() to avoid creating a mapping in
+ * arch_domain_create() that requires p2m_put_l3_page() to be called.
  */
 int p2m_set_entry(struct p2m_domain *p2m,
                   gfn_t sgfn,
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.16


From xen-changelog-bounces@lists.xenproject.org Tue Oct 25 20:00:26 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Oct 2022 20:00:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.430201.681601 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onQ65-0002Vb-Nu; Tue, 25 Oct 2022 20:00:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 430201.681601; Tue, 25 Oct 2022 20:00:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onQ65-0002VT-LL; Tue, 25 Oct 2022 20:00:25 +0000
Received: by outflank-mailman (input) for mailman id 430201;
 Tue, 25 Oct 2022 20:00:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onQ65-0002VN-4L
 for xen-changelog@lists.xenproject.org; Tue, 25 Oct 2022 20:00:25 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onQ65-0002ds-2K
 for xen-changelog@lists.xenproject.org; Tue, 25 Oct 2022 20:00:25 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onQ65-00066F-1V
 for xen-changelog@lists.xenproject.org; Tue, 25 Oct 2022 20:00:25 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=cC+3ebUtjo29KUyB24x8ng/u/m/IbDP2eduX5NgySIQ=; b=vX47bEpOol6EZIUMXROLx9nsgc
	XKa62yg+9fLGMJhhDCoumXy6nGM6jzn9wx8Mx0T5b9OC52wKMK8S/wE8QeLVzEnUZLbYjbb/2ZuNk
	18diQ4L+hq316XkvId3o+Af+tUQANSfBD8uBTlBeM+TNtTP3GZjRxYkGXArAxPy/toQs=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.15] arm/p2m: Rework p2m_init()
Message-Id: <E1onQ65-00066F-1V@xenbits.xenproject.org>
Date: Tue, 25 Oct 2022 20:00:25 +0000

commit 6f948fd1929c01b82a119f03670cab38ffebb47e
Author:     Andrew Cooper <andrew.cooper3@citrix.com>
AuthorDate: Tue Oct 25 09:21:11 2022 +0000
Commit:     Julien Grall <jgrall@amazon.com>
CommitDate: Tue Oct 25 20:57:58 2022 +0100

    arm/p2m: Rework p2m_init()
    
    p2m_init() is mostly trivial initialisation, but has two fallible
    operations which sit on either side of the backpointer assignment,
    the trigger for teardown to take action.
    
    p2m_free_vmid() is idempotent with a failed p2m_alloc_vmid(), so rearrange
    p2m_init() to perform all trivial setup, then set the backpointer, then
    perform all fallible setup.
    
    This will simplify a future bugfix which needs to add a third fallible
    operation.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
    (cherry picked from commit: 3783e583319fa1ce75e414d851f0fde191a14753)
---
 xen/arch/arm/p2m.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index c1055ff2a7..25eb1d84cb 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1733,7 +1733,7 @@ void p2m_final_teardown(struct domain *d)
 int p2m_init(struct domain *d)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    int rc = 0;
+    int rc;
     unsigned int cpu;
 
     rwlock_init(&p2m->lock);
@@ -1742,11 +1742,6 @@ int p2m_init(struct domain *d)
     INIT_PAGE_LIST_HEAD(&d->arch.paging.p2m_freelist);
 
     p2m->vmid = INVALID_VMID;
-
-    rc = p2m_alloc_vmid(d);
-    if ( rc != 0 )
-        return rc;
-
     p2m->max_mapped_gfn = _gfn(0);
     p2m->lowest_mapped_gfn = _gfn(ULONG_MAX);
 
@@ -1762,8 +1757,6 @@ int p2m_init(struct domain *d)
     p2m->clean_pte = is_iommu_enabled(d) &&
         !iommu_has_feature(d, IOMMU_FEAT_COHERENT_WALK);
 
-    rc = p2m_alloc_table(d);
-
     /*
      * Make sure that the type chosen to is able to store the an vCPU ID
      * between 0 and the maximum of virtual CPUS supported as long as
@@ -1776,13 +1769,20 @@ int p2m_init(struct domain *d)
        p2m->last_vcpu_ran[cpu] = INVALID_VCPU_ID;
 
     /*
-     * Besides getting a domain when we only have the p2m in hand,
-     * the back pointer to domain is also used in p2m_teardown()
-     * as an end-of-initialization indicator.
+     * "Trivial" initialisation is now complete.  Set the backpointer so
+     * p2m_teardown() and friends know to do something.
      */
     p2m->domain = d;
 
-    return rc;
+    rc = p2m_alloc_vmid(d);
+    if ( rc )
+        return rc;
+
+    rc = p2m_alloc_table(d);
+    if ( rc )
+        return rc;
+
+    return 0;
 }
 
 /*
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.15


From xen-changelog-bounces@lists.xenproject.org Tue Oct 25 20:00:35 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Oct 2022 20:00:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.430202.681605 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onQ6F-0002YF-Pq; Tue, 25 Oct 2022 20:00:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 430202.681605; Tue, 25 Oct 2022 20:00:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onQ6F-0002Y7-Mt; Tue, 25 Oct 2022 20:00:35 +0000
Received: by outflank-mailman (input) for mailman id 430202;
 Tue, 25 Oct 2022 20:00:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onQ6F-0002Xz-6c
 for xen-changelog@lists.xenproject.org; Tue, 25 Oct 2022 20:00:35 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onQ6F-0002e2-5v
 for xen-changelog@lists.xenproject.org; Tue, 25 Oct 2022 20:00:35 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onQ6F-000674-4a
 for xen-changelog@lists.xenproject.org; Tue, 25 Oct 2022 20:00:35 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=YyyYp7YI3dHJCO+D6iQYi6VQpH4cfMyGGj+a+K6ewyA=; b=NDzChCKexm9x+U1V+p8quWz0qY
	ep1B8SGbNub9zH12xZTk4r4SsHLUg9GW4luzeRLem0514r0maOYRp1QVJc9XqarKMReLmrJgJjOOe
	+T+Uxg0n3Tsj7RCc2gPkvHpEmAN5fIsOgTxtpRoewFU86mCec8JF282QR+ZTgYsAiN1g=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.15] xen/arm: p2m: Populate pages for GICv2 mapping in p2m_init()
Message-Id: <E1onQ6F-000674-4a@xenbits.xenproject.org>
Date: Tue, 25 Oct 2022 20:00:35 +0000

commit f8915cd5dbe0f51e9bb31a54fe40600b839dd707
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Tue Oct 25 09:21:12 2022 +0000
Commit:     Julien Grall <jgrall@amazon.com>
CommitDate: Tue Oct 25 20:57:59 2022 +0100

    xen/arm: p2m: Populate pages for GICv2 mapping in p2m_init()
    
    Hardware using GICv2 needs a P2M mapping of the 8KB GICv2 area to be
    created when the domain is created. In the worst case this requires 6
    P2M pages, because the two pages are consecutive but not necessarily
    in the same L3 page table. To satisfy this requirement, and to keep
    some buffer, populate 16 pages into the P2M pages pool in p2m_init()
    at the domain creation stage. For GICv3 the above-mentioned P2M
    mapping is not necessary, but since the 16 pages allocated here are
    not lost, populate them unconditionally.
    
    With the default 16 P2M pages populated, a failure during domain
    creation can now happen with P2M pages already in use. To properly
    free the P2M in this case, first make preemption of p2m_teardown()
    optional, then call p2m_teardown() and p2m_set_allocation(d, 0, NULL)
    non-preemptively in p2m_final_teardown(). As a non-preemptive
    p2m_teardown() should only ever return 0, use a BUG_ON() to confirm
    that.
    
    p2m_final_teardown() is called either after
    domain_relinquish_resources(), where relinquish_p2m_mapping() has
    already been called, or from the failure path of
    domain_create()/arch_domain_create(), where mappings that require
    p2m_put_l3_page() should never be created. Hence
    relinquish_p2m_mapping() is not added to p2m_final_teardown();
    in-code comments are added to explain this.
    
    Fixes: cbea5a1149ca ("xen/arm: Allocate and free P2M pages from the P2M pool")
    Suggested-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
    (cherry picked from commit: c7cff1188802646eaa38e918e5738da0e84949be)
---
 xen/arch/arm/domain.c     |  2 +-
 xen/arch/arm/p2m.c        | 34 ++++++++++++++++++++++++++++++++--
 xen/include/asm-arm/p2m.h | 14 ++++++++++----
 3 files changed, 43 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index a5ffd952ec..b11359b8cc 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -1041,7 +1041,7 @@ int domain_relinquish_resources(struct domain *d)
             return ret;
 
     PROGRESS(p2m):
-        ret = p2m_teardown(d);
+        ret = p2m_teardown(d, true);
         if ( ret )
             return ret;
 
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 25eb1d84cb..f6012f2a53 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1664,7 +1664,7 @@ static void p2m_free_vmid(struct domain *d)
     spin_unlock(&vmid_alloc_lock);
 }
 
-int p2m_teardown(struct domain *d)
+int p2m_teardown(struct domain *d, bool allow_preemption)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     unsigned long count = 0;
@@ -1672,6 +1672,9 @@ int p2m_teardown(struct domain *d)
     unsigned int i;
     int rc = 0;
 
+    if ( page_list_empty(&p2m->pages) )
+        return 0;
+
     p2m_write_lock(p2m);
 
     /*
@@ -1695,7 +1698,7 @@ int p2m_teardown(struct domain *d)
         p2m_free_page(p2m->domain, pg);
         count++;
         /* Arbitrarily preempt every 512 iterations */
-        if ( !(count % 512) && hypercall_preempt_check() )
+        if ( allow_preemption && !(count % 512) && hypercall_preempt_check() )
         {
             rc = -ERESTART;
             break;
@@ -1715,7 +1718,20 @@ void p2m_final_teardown(struct domain *d)
     if ( !p2m->domain )
         return;
 
+    /*
+     * No need to call relinquish_p2m_mapping() here because
+     * p2m_final_teardown() is called either after domain_relinquish_resources()
+     * where relinquish_p2m_mapping() has been called, or from failure path of
+     * domain_create()/arch_domain_create() where mappings that require
+     * p2m_put_l3_page() should never be created. For the latter case, also see
+     * comment on top of the p2m_set_entry() for more info.
+     */
+
+    BUG_ON(p2m_teardown(d, false));
     ASSERT(page_list_empty(&p2m->pages));
+
+    while ( p2m_teardown_allocation(d) == -ERESTART )
+        continue; /* No preemption support here */
     ASSERT(page_list_empty(&d->arch.paging.p2m_freelist));
 
     if ( p2m->root )
@@ -1782,6 +1798,20 @@ int p2m_init(struct domain *d)
     if ( rc )
         return rc;
 
+    /*
+     * Hardware using GICv2 needs to create a P2M mapping of 8KB GICv2 area
+     * when the domain is created. Considering the worst case for page
+     * tables and keep a buffer, populate 16 pages to the P2M pages pool here.
+     * For GICv3, the above-mentioned P2M mapping is not necessary, but since
+     * the allocated 16 pages here would not be lost, hence populate these
+     * pages unconditionally.
+     */
+    spin_lock(&d->arch.paging.lock);
+    rc = p2m_set_allocation(d, 16, NULL);
+    spin_unlock(&d->arch.paging.lock);
+    if ( rc )
+        return rc;
+
     return 0;
 }
 
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 18675b2345..ea7ca41d82 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -194,14 +194,18 @@ int p2m_init(struct domain *d);
 
 /*
  * The P2M resources are freed in two parts:
- *  - p2m_teardown() will be called when relinquish the resources. It
- *    will free large resources (e.g. intermediate page-tables) that
- *    requires preemption.
+ *  - p2m_teardown() will be called preemptively when relinquish the
+ *    resources, in which case it will free large resources (e.g. intermediate
+ *    page-tables) that requires preemption.
  *  - p2m_final_teardown() will be called when domain struct is been
  *    freed. This *cannot* be preempted and therefore one small
  *    resources should be freed here.
+ *  Note that p2m_final_teardown() will also call p2m_teardown(), to properly
+ *  free the P2M when failures happen in the domain creation with P2M pages
+ *  already in use. In this case p2m_teardown() is called non-preemptively and
+ *  p2m_teardown() will always return 0.
  */
-int p2m_teardown(struct domain *d);
+int p2m_teardown(struct domain *d, bool allow_preemption);
 void p2m_final_teardown(struct domain *d);
 
 /*
@@ -266,6 +270,8 @@ mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
 /*
  * Direct set a p2m entry: only for use by the P2M code.
  * The P2M write lock should be taken.
+ * TODO: Add a check in __p2m_set_entry() to avoid creating a mapping in
+ * arch_domain_create() that requires p2m_put_l3_page() to be called.
  */
 int p2m_set_entry(struct p2m_domain *p2m,
                   gfn_t sgfn,
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.15


From xen-changelog-bounces@lists.xenproject.org Tue Oct 25 20:00:46 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Oct 2022 20:00:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.430203.681609 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onQ6Q-0002bt-Se; Tue, 25 Oct 2022 20:00:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 430203.681609; Tue, 25 Oct 2022 20:00:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onQ6Q-0002bl-Pt; Tue, 25 Oct 2022 20:00:46 +0000
Received: by outflank-mailman (input) for mailman id 430203;
 Tue, 25 Oct 2022 20:00:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onQ6P-0002bc-Jf
 for xen-changelog@lists.xenproject.org; Tue, 25 Oct 2022 20:00:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onQ6P-0002eE-It
 for xen-changelog@lists.xenproject.org; Tue, 25 Oct 2022 20:00:45 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onQ6P-00068U-ID
 for xen-changelog@lists.xenproject.org; Tue, 25 Oct 2022 20:00:45 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=ln5bE88Cp/uN+QRUreZrWbo+nJWtLM+BRbNAeHUZzis=; b=JY2CXDFi2eGKEw80FbOp1/lrK5
	ymgysceVMSHAY4z4z5f6CmQRNz5dj2KyOo/uEz5jFBxMY6Bp9p5Wn9nAIYiCaS8JVZQ1ERcCwgcWv
	0wOdd+dE+9HXp48GeF8vPYvKZNGsmn3e+plQaMI434iDB56vHMwL809K6Y6NkrO7rG7Q=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.14] arm/p2m: Rework p2m_init()
Message-Id: <E1onQ6P-00068U-ID@xenbits.xenproject.org>
Date: Tue, 25 Oct 2022 20:00:45 +0000

commit f25c377285d155d7d88cb0e4efad58f7fd8c9d4b
Author:     Andrew Cooper <andrew.cooper3@citrix.com>
AuthorDate: Tue Oct 25 09:21:11 2022 +0000
Commit:     Julien Grall <jgrall@amazon.com>
CommitDate: Tue Oct 25 20:59:32 2022 +0100

    arm/p2m: Rework p2m_init()
    
    p2m_init() is mostly trivial initialisation, but has two fallible
    operations which sit on either side of the backpointer that triggers
    teardown to take action.
    
    p2m_free_vmid() is idempotent with a failed p2m_alloc_vmid(), so rearrange
    p2m_init() to perform all trivial setup, then set the backpointer, then
    perform all fallible setup.
    
    This will simplify a future bugfix which needs to add a third fallible
    operation.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
    (cherry picked from commit: 3783e583319fa1ce75e414d851f0fde191a14753)
---
 xen/arch/arm/p2m.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)
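The error-handling shape this commit describes (all trivial setup first, then the backpointer, then every fallible step) can be sketched with hypothetical stubs. Only the ordering and the `p2m_alloc_vmid()`/`p2m_alloc_table()` names come from the patch; the types and stub bodies below are illustrative, not Xen code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-ins for the real Xen types and allocators. */
struct domain;
struct p2m_domain {
    struct domain *domain;   /* backpointer, used as teardown trigger */
    bool have_vmid;
    bool have_table;
};

static int p2m_alloc_vmid_stub(struct p2m_domain *p2m, bool fail)
{
    if ( fail )
        return -12;          /* -ENOMEM */
    p2m->have_vmid = true;
    return 0;
}

static int p2m_alloc_table_stub(struct p2m_domain *p2m, bool fail)
{
    if ( fail )
        return -12;
    p2m->have_table = true;
    return 0;
}

/*
 * Shape of the reworked p2m_init(): trivial setup, then the
 * backpointer, then the fallible steps.  Whichever step fails, the
 * caller can rely on p2m->domain being set, so teardown knows there
 * is work to do.
 */
static int p2m_init_shape(struct p2m_domain *p2m, struct domain *d,
                          bool fail_vmid, bool fail_table)
{
    int rc;

    /* ... trivial initialisation would go here ... */

    p2m->domain = d;

    rc = p2m_alloc_vmid_stub(p2m, fail_vmid);
    if ( rc )
        return rc;

    rc = p2m_alloc_table_stub(p2m, fail_table);
    if ( rc )
        return rc;

    return 0;
}
```

The point of the ordering is visible in the failure case: even when the first fallible step fails, the backpointer is already set, so a single teardown path covers both partial and complete initialisation.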

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 13b06c0fe4..2642d2748c 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1699,7 +1699,7 @@ void p2m_final_teardown(struct domain *d)
 int p2m_init(struct domain *d)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    int rc = 0;
+    int rc;
     unsigned int cpu;
 
     rwlock_init(&p2m->lock);
@@ -1708,11 +1708,6 @@ int p2m_init(struct domain *d)
     INIT_PAGE_LIST_HEAD(&d->arch.paging.p2m_freelist);
 
     p2m->vmid = INVALID_VMID;
-
-    rc = p2m_alloc_vmid(d);
-    if ( rc != 0 )
-        return rc;
-
     p2m->max_mapped_gfn = _gfn(0);
     p2m->lowest_mapped_gfn = _gfn(ULONG_MAX);
 
@@ -1728,8 +1723,6 @@ int p2m_init(struct domain *d)
     p2m->clean_pte = is_iommu_enabled(d) &&
         !iommu_has_feature(d, IOMMU_FEAT_COHERENT_WALK);
 
-    rc = p2m_alloc_table(d);
-
     /*
      * Make sure that the type chosen to is able to store the an vCPU ID
      * between 0 and the maximum of virtual CPUS supported as long as
@@ -1742,13 +1735,20 @@ int p2m_init(struct domain *d)
        p2m->last_vcpu_ran[cpu] = INVALID_VCPU_ID;
 
     /*
-     * Besides getting a domain when we only have the p2m in hand,
-     * the back pointer to domain is also used in p2m_teardown()
-     * as an end-of-initialization indicator.
+     * "Trivial" initialisation is now complete.  Set the backpointer so
+     * p2m_teardown() and friends know to do something.
      */
     p2m->domain = d;
 
-    return rc;
+    rc = p2m_alloc_vmid(d);
+    if ( rc )
+        return rc;
+
+    rc = p2m_alloc_table(d);
+    if ( rc )
+        return rc;
+
+    return 0;
 }
 
 /*
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.14


From xen-changelog-bounces@lists.xenproject.org Tue Oct 25 20:00:56 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Oct 2022 20:00:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.430204.681613 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onQ6a-0002fQ-U9; Tue, 25 Oct 2022 20:00:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 430204.681613; Tue, 25 Oct 2022 20:00:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onQ6a-0002fH-RP; Tue, 25 Oct 2022 20:00:56 +0000
Received: by outflank-mailman (input) for mailman id 430204;
 Tue, 25 Oct 2022 20:00:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onQ6Z-0002f8-My
 for xen-changelog@lists.xenproject.org; Tue, 25 Oct 2022 20:00:55 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onQ6Z-0002ef-M7
 for xen-changelog@lists.xenproject.org; Tue, 25 Oct 2022 20:00:55 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onQ6Z-00069e-L8
 for xen-changelog@lists.xenproject.org; Tue, 25 Oct 2022 20:00:55 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=PB9WL6I3Xb7xgZFBWHGAxcXjliluuzdbU3lfsVl5k30=; b=sv/6QjTB7bl0r/33QXeoxBRxUA
	kU4yFs2bxxdpGUK7UeYw0d/Kh4tKmtbwprXVhD5zSY2/cqYbf63oCAYMZVLdslNgDGY4u+xyLLaqE
	k6jOGng4D569idOY+aptvb7cbb5SPqG6YqBsgP1puxMGpepMsJeq88ehA9NoQIA2it7I=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.14] xen/arm: p2m: Populate pages for GICv2 mapping in p2m_init()
Message-Id: <E1onQ6Z-00069e-L8@xenbits.xenproject.org>
Date: Tue, 25 Oct 2022 20:00:55 +0000

commit 96220aec3e72b9d71600d78958b60e77db753b94
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Tue Oct 25 09:21:12 2022 +0000
Commit:     Julien Grall <jgrall@amazon.com>
CommitDate: Tue Oct 25 20:59:33 2022 +0100

    xen/arm: p2m: Populate pages for GICv2 mapping in p2m_init()
    
    Hardware using GICv2 needs to create a P2M mapping of the 8KB GICv2
    area when the domain is created. The worst case requires 6 P2M pages,
    as the two mapped pages are consecutive but not necessarily covered
    by the same L3 page table. To satisfy this and keep a buffer,
    populate 16 pages as the default value into the P2M pages pool in
    p2m_init() at the domain creation stage. For GICv3, the
    above-mentioned P2M mapping is not necessary, but since the 16 pages
    allocated here would not be lost, populate them
    unconditionally.
    
    With the default 16 P2M pages populated, domain creation can now
    fail with P2M pages already in use. To properly free the P2M in this
    case, first make preemption of p2m_teardown() optional, then call
    p2m_teardown() and p2m_set_allocation(d, 0, NULL) non-preemptively
    in p2m_final_teardown(). As a non-preemptive p2m_teardown() should
    only return 0, use a BUG_ON() to confirm that.
    
    Since p2m_final_teardown() is called either after
    domain_relinquish_resources(), where relinquish_p2m_mapping() has
    already been called, or from the failure path of
    domain_create()/arch_domain_create(), where mappings that require
    p2m_put_l3_page() should never be created, relinquish_p2m_mapping()
    is not added to p2m_final_teardown(); add in-code comments to note
    this.
    
    Fixes: cbea5a1149ca ("xen/arm: Allocate and free P2M pages from the P2M pool")
    Suggested-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
    (cherry picked from commit: c7cff1188802646eaa38e918e5738da0e84949be)
---
 xen/arch/arm/domain.c     |  2 +-
 xen/arch/arm/p2m.c        | 34 ++++++++++++++++++++++++++++++++--
 xen/include/asm-arm/p2m.h | 14 ++++++++++----
 3 files changed, 43 insertions(+), 7 deletions(-)
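The sizing argument above (8KB area, worst case 6 P2M pages, pool of 16 as a buffer) can be checked with a small back-of-envelope calculation. The constants mirror the commit message; the function itself is a hypothetical illustration, not Xen code:

```c
#include <assert.h>

#define PAGE_SIZE_4K     4096UL
#define GICV2_AREA_SIZE  (8 * 1024UL)   /* 8KB GICv2 CPU interface area */
#define TABLE_LEVELS     3UL            /* L1 + L2 + L3 table pages */

static unsigned long gicv2_worst_case_p2m_pages(void)
{
    unsigned long mapped = GICV2_AREA_SIZE / PAGE_SIZE_4K;  /* 2 pages */

    /*
     * The two mapped pages are consecutive but may fall either side
     * of an L3 table boundary, so in the worst case each one needs
     * its own L1/L2/L3 table page: 2 * 3 = 6 P2M pages.
     */
    return mapped * TABLE_LEVELS;
}
```

A pool of 16 pages therefore covers the worst case of 6 with room to spare, which is why the commit can populate it unconditionally even on GICv3 where the mapping is not needed.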

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index aae615f7d6..0fa1c0cb80 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -1032,7 +1032,7 @@ int domain_relinquish_resources(struct domain *d)
             return ret;
 
     PROGRESS(p2m):
-        ret = p2m_teardown(d);
+        ret = p2m_teardown(d, true);
         if ( ret )
             return ret;
 
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 2642d2748c..3eb6f16b30 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1630,7 +1630,7 @@ static void p2m_free_vmid(struct domain *d)
     spin_unlock(&vmid_alloc_lock);
 }
 
-int p2m_teardown(struct domain *d)
+int p2m_teardown(struct domain *d, bool allow_preemption)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     unsigned long count = 0;
@@ -1638,6 +1638,9 @@ int p2m_teardown(struct domain *d)
     unsigned int i;
     int rc = 0;
 
+    if ( page_list_empty(&p2m->pages) )
+        return 0;
+
     p2m_write_lock(p2m);
 
     /*
@@ -1661,7 +1664,7 @@ int p2m_teardown(struct domain *d)
         p2m_free_page(p2m->domain, pg);
         count++;
         /* Arbitrarily preempt every 512 iterations */
-        if ( !(count % 512) && hypercall_preempt_check() )
+        if ( allow_preemption && !(count % 512) && hypercall_preempt_check() )
         {
             rc = -ERESTART;
             break;
@@ -1681,7 +1684,20 @@ void p2m_final_teardown(struct domain *d)
     if ( !p2m->domain )
         return;
 
+    /*
+     * No need to call relinquish_p2m_mapping() here because
+     * p2m_final_teardown() is called either after domain_relinquish_resources()
+     * where relinquish_p2m_mapping() has been called, or from failure path of
+     * domain_create()/arch_domain_create() where mappings that require
+     * p2m_put_l3_page() should never be created. For the latter case, also see
+     * comment on top of the p2m_set_entry() for more info.
+     */
+
+    BUG_ON(p2m_teardown(d, false));
     ASSERT(page_list_empty(&p2m->pages));
+
+    while ( p2m_teardown_allocation(d) == -ERESTART )
+        continue; /* No preemption support here */
     ASSERT(page_list_empty(&d->arch.paging.p2m_freelist));
 
     if ( p2m->root )
@@ -1748,6 +1764,20 @@ int p2m_init(struct domain *d)
     if ( rc )
         return rc;
 
+    /*
+     * Hardware using GICv2 needs to create a P2M mapping of 8KB GICv2 area
+     * when the domain is created. Considering the worst case for page
+     * tables and keep a buffer, populate 16 pages to the P2M pages pool here.
+     * For GICv3, the above-mentioned P2M mapping is not necessary, but since
+     * the allocated 16 pages here would not be lost, hence populate these
+     * pages unconditionally.
+     */
+    spin_lock(&d->arch.paging.lock);
+    rc = p2m_set_allocation(d, 16, NULL);
+    spin_unlock(&d->arch.paging.lock);
+    if ( rc )
+        return rc;
+
     return 0;
 }
 
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index b733f55d48..ac4edb95ce 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -185,14 +185,18 @@ int p2m_init(struct domain *d);
 
 /*
  * The P2M resources are freed in two parts:
- *  - p2m_teardown() will be called when relinquish the resources. It
- *    will free large resources (e.g. intermediate page-tables) that
- *    requires preemption.
+ *  - p2m_teardown() will be called preemptively when relinquish the
+ *    resources, in which case it will free large resources (e.g. intermediate
+ *    page-tables) that requires preemption.
  *  - p2m_final_teardown() will be called when domain struct is been
  *    freed. This *cannot* be preempted and therefore one small
  *    resources should be freed here.
+ *  Note that p2m_final_teardown() will also call p2m_teardown(), to properly
+ *  free the P2M when failures happen in the domain creation with P2M pages
+ *  already in use. In this case p2m_teardown() is called non-preemptively and
+ *  p2m_teardown() will always return 0.
  */
-int p2m_teardown(struct domain *d);
+int p2m_teardown(struct domain *d, bool allow_preemption);
 void p2m_final_teardown(struct domain *d);
 
 /*
@@ -257,6 +261,8 @@ mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
 /*
  * Direct set a p2m entry: only for use by the P2M code.
  * The P2M write lock should be taken.
+ * TODO: Add a check in __p2m_set_entry() to avoid creating a mapping in
+ * arch_domain_create() that requires p2m_put_l3_page() to be called.
  */
 int p2m_set_entry(struct p2m_domain *p2m,
                   gfn_t sgfn,
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.14


From xen-changelog-bounces@lists.xenproject.org Tue Oct 25 20:22:09 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Oct 2022 20:22:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.430221.681650 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onQR4-0006hi-91; Tue, 25 Oct 2022 20:22:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 430221.681650; Tue, 25 Oct 2022 20:22:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onQR4-0006hY-5v; Tue, 25 Oct 2022 20:22:06 +0000
Received: by outflank-mailman (input) for mailman id 430221;
 Tue, 25 Oct 2022 20:22:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onQR2-0006hS-LZ
 for xen-changelog@lists.xenproject.org; Tue, 25 Oct 2022 20:22:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onQR2-00031m-Jh
 for xen-changelog@lists.xenproject.org; Tue, 25 Oct 2022 20:22:04 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onQR2-0007Br-Id
 for xen-changelog@lists.xenproject.org; Tue, 25 Oct 2022 20:22:04 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=anN1lmVdeXE3ddRI05bAjZlsjrV+k7OXeXdBtXY4cmU=; b=t7JLtQ4AxLa9bmzCcaLX1V7rzm
	V3kSYWrWZPLBKHU1QwA5MWUIOz/aJvO+OpFOO9XgcceCyQLryYCwMoP3vCbb3FoTYlJI66xOgN3+5
	MzTrYzO58K4DN0bCN+kU6ISB6Bwj/OidCpMyP0rnKhGfJwHTb9D6pLSdTOM6zu3vdPLY=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.13] arm/p2m: Rework p2m_init()
Message-Id: <E1onQR2-0007Br-Id@xenbits.xenproject.org>
Date: Tue, 25 Oct 2022 20:22:04 +0000

commit 3954468f3af2525dbe1031d5711bad8656802d3c
Author:     Andrew Cooper <andrew.cooper3@citrix.com>
AuthorDate: Tue Oct 25 09:19:36 2022 +0000
Commit:     Julien Grall <jgrall@amazon.com>
CommitDate: Tue Oct 25 21:09:25 2022 +0100

    arm/p2m: Rework p2m_init()
    
    p2m_init() is mostly trivial initialisation, but has two fallible
    operations which sit on either side of the backpointer that triggers
    teardown to take action.
    
    p2m_free_vmid() is idempotent with a failed p2m_alloc_vmid(), so rearrange
    p2m_init() to perform all trivial setup, then set the backpointer, then
    perform all fallible setup.
    
    This will simplify a future bugfix which needs to add a third fallible
    operation.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
    (cherry picked from commit: 3783e583319fa1ce75e414d851f0fde191a14753)
---
 xen/arch/arm/p2m.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 3196690544..fa6d0a83e9 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1698,7 +1698,7 @@ void p2m_final_teardown(struct domain *d)
 int p2m_init(struct domain *d)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    int rc = 0;
+    int rc;
     unsigned int cpu;
 
     rwlock_init(&p2m->lock);
@@ -1707,11 +1707,6 @@ int p2m_init(struct domain *d)
     INIT_PAGE_LIST_HEAD(&d->arch.paging.p2m_freelist);
 
     p2m->vmid = INVALID_VMID;
-
-    rc = p2m_alloc_vmid(d);
-    if ( rc != 0 )
-        return rc;
-
     p2m->max_mapped_gfn = _gfn(0);
     p2m->lowest_mapped_gfn = _gfn(ULONG_MAX);
 
@@ -1727,8 +1722,6 @@ int p2m_init(struct domain *d)
     p2m->clean_pte = is_iommu_enabled(d) &&
         !iommu_has_feature(d, IOMMU_FEAT_COHERENT_WALK);
 
-    rc = p2m_alloc_table(d);
-
     /*
      * Make sure that the type chosen to is able to store the an vCPU ID
      * between 0 and the maximum of virtual CPUS supported as long as
@@ -1741,13 +1734,20 @@ int p2m_init(struct domain *d)
        p2m->last_vcpu_ran[cpu] = INVALID_VCPU_ID;
 
     /*
-     * Besides getting a domain when we only have the p2m in hand,
-     * the back pointer to domain is also used in p2m_teardown()
-     * as an end-of-initialization indicator.
+     * "Trivial" initialisation is now complete.  Set the backpointer so
+     * p2m_teardown() and friends know to do something.
      */
     p2m->domain = d;
 
-    return rc;
+    rc = p2m_alloc_vmid(d);
+    if ( rc )
+        return rc;
+
+    rc = p2m_alloc_table(d);
+    if ( rc )
+        return rc;
+
+    return 0;
 }
 
 /*
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.13


From xen-changelog-bounces@lists.xenproject.org Tue Oct 25 20:22:16 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Oct 2022 20:22:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.430222.681654 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onQRE-0006ji-AC; Tue, 25 Oct 2022 20:22:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 430222.681654; Tue, 25 Oct 2022 20:22:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onQRE-0006ja-7S; Tue, 25 Oct 2022 20:22:16 +0000
Received: by outflank-mailman (input) for mailman id 430222;
 Tue, 25 Oct 2022 20:22:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onQRC-0006jK-Nl
 for xen-changelog@lists.xenproject.org; Tue, 25 Oct 2022 20:22:14 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onQRC-00031u-Mx
 for xen-changelog@lists.xenproject.org; Tue, 25 Oct 2022 20:22:14 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onQRC-0007CJ-Lt
 for xen-changelog@lists.xenproject.org; Tue, 25 Oct 2022 20:22:14 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=IT8q6InrMII69xYu1Q8C1txHZryIgWgerodNQvDx57o=; b=0/BNO/FKWOyAGbtOFfW8X0+dFq
	BItVDbcsaUqt0typv18Csxp7wk6VFORUahjFIUqZK1KAXQyeSXz4rHRiKrWX0G872CL+6IiCMeejy
	A+StwEJ+aq3WFsfG0ZZhaEbx+h/mng9Ih1VXGPuafUH2mjLACYQVgjqdVI16tf0giDRc=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.13] xen/arm: p2m: Populate pages for GICv2 mapping in p2m_init()
Message-Id: <E1onQRC-0007CJ-Lt@xenbits.xenproject.org>
Date: Tue, 25 Oct 2022 20:22:14 +0000

commit 5b668634a9feb68e7a27339f25591b019d0923c3
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Tue Oct 25 09:19:37 2022 +0000
Commit:     Julien Grall <jgrall@amazon.com>
CommitDate: Tue Oct 25 21:09:58 2022 +0100

    xen/arm: p2m: Populate pages for GICv2 mapping in p2m_init()
    
    Hardware using GICv2 needs to create a P2M mapping of the 8KB GICv2
    area when the domain is created. The worst case requires 6 P2M pages,
    as the two mapped pages are consecutive but not necessarily covered
    by the same L3 page table. To satisfy this and keep a buffer,
    populate 16 pages as the default value into the P2M pages pool in
    p2m_init() at the domain creation stage. For GICv3, the
    above-mentioned P2M mapping is not necessary, but since the 16 pages
    allocated here would not be lost, populate them
    unconditionally.
    
    With the default 16 P2M pages populated, domain creation can now
    fail with P2M pages already in use. To properly free the P2M in this
    case, first make preemption of p2m_teardown() optional, then call
    p2m_teardown() and p2m_set_allocation(d, 0, NULL) non-preemptively
    in p2m_final_teardown(). As a non-preemptive p2m_teardown() should
    only return 0, use a BUG_ON() to confirm that.
    
    Since p2m_final_teardown() is called either after
    domain_relinquish_resources(), where relinquish_p2m_mapping() has
    already been called, or from the failure path of
    domain_create()/arch_domain_create(), where mappings that require
    p2m_put_l3_page() should never be created, relinquish_p2m_mapping()
    is not added to p2m_final_teardown(); add in-code comments to note
    this.
    
    Fixes: cbea5a1149ca ("xen/arm: Allocate and free P2M pages from the P2M pool")
    Suggested-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
    (cherry picked from commit: c7cff1188802646eaa38e918e5738da0e84949be)
---
 xen/arch/arm/domain.c     |  2 +-
 xen/arch/arm/p2m.c        | 34 ++++++++++++++++++++++++++++++++--
 xen/include/asm-arm/p2m.h | 14 ++++++++++----
 3 files changed, 43 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 31abe7d6f9..98395173db 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -1018,7 +1018,7 @@ int domain_relinquish_resources(struct domain *d)
         /* Fallthrough */
 
     case RELMEM_p2m:
-        ret = p2m_teardown(d);
+        ret = p2m_teardown(d, true);
         if ( ret )
             return ret;
 
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index fa6d0a83e9..ae0c8d23d4 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1629,7 +1629,7 @@ static void p2m_free_vmid(struct domain *d)
     spin_unlock(&vmid_alloc_lock);
 }
 
-int p2m_teardown(struct domain *d)
+int p2m_teardown(struct domain *d, bool allow_preemption)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     unsigned long count = 0;
@@ -1637,6 +1637,9 @@ int p2m_teardown(struct domain *d)
     unsigned int i;
     int rc = 0;
 
+    if ( page_list_empty(&p2m->pages) )
+        return 0;
+
     p2m_write_lock(p2m);
 
     /*
@@ -1660,7 +1663,7 @@ int p2m_teardown(struct domain *d)
         p2m_free_page(p2m->domain, pg);
         count++;
         /* Arbitrarily preempt every 512 iterations */
-        if ( !(count % 512) && hypercall_preempt_check() )
+        if ( allow_preemption && !(count % 512) && hypercall_preempt_check() )
         {
             rc = -ERESTART;
             break;
@@ -1680,7 +1683,20 @@ void p2m_final_teardown(struct domain *d)
     if ( !p2m->domain )
         return;
 
+    /*
+     * No need to call relinquish_p2m_mapping() here because
+     * p2m_final_teardown() is called either after domain_relinquish_resources()
+     * where relinquish_p2m_mapping() has been called, or from failure path of
+     * domain_create()/arch_domain_create() where mappings that require
+     * p2m_put_l3_page() should never be created. For the latter case, also see
+     * comment on top of the p2m_set_entry() for more info.
+     */
+
+    BUG_ON(p2m_teardown(d, false));
     ASSERT(page_list_empty(&p2m->pages));
+
+    while ( p2m_teardown_allocation(d) == -ERESTART )
+        continue; /* No preemption support here */
     ASSERT(page_list_empty(&d->arch.paging.p2m_freelist));
 
     if ( p2m->root )
@@ -1747,6 +1763,20 @@ int p2m_init(struct domain *d)
     if ( rc )
         return rc;
 
+    /*
+     * Hardware using GICv2 needs to create a P2M mapping of 8KB GICv2 area
+     * when the domain is created. Considering the worst case for page
+     * tables and keep a buffer, populate 16 pages to the P2M pages pool here.
+     * For GICv3, the above-mentioned P2M mapping is not necessary, but since
+     * the allocated 16 pages here would not be lost, hence populate these
+     * pages unconditionally.
+     */
+    spin_lock(&d->arch.paging.lock);
+    rc = p2m_set_allocation(d, 16, NULL);
+    spin_unlock(&d->arch.paging.lock);
+    if ( rc )
+        return rc;
+
     return 0;
 }
 
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index b1c9b947bb..45d535830f 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -173,14 +173,18 @@ int p2m_init(struct domain *d);
 
 /*
  * The P2M resources are freed in two parts:
- *  - p2m_teardown() will be called when relinquish the resources. It
- *    will free large resources (e.g. intermediate page-tables) that
- *    requires preemption.
+ *  - p2m_teardown() will be called preemptively when relinquish the
+ *    resources, in which case it will free large resources (e.g. intermediate
+ *    page-tables) that requires preemption.
  *  - p2m_final_teardown() will be called when domain struct is been
  *    freed. This *cannot* be preempted and therefore one small
  *    resources should be freed here.
+ *  Note that p2m_final_teardown() will also call p2m_teardown(), to properly
+ *  free the P2M when failures happen in the domain creation with P2M pages
+ *  already in use. In this case p2m_teardown() is called non-preemptively and
+ *  p2m_teardown() will always return 0.
  */
-int p2m_teardown(struct domain *d);
+int p2m_teardown(struct domain *d, bool allow_preemption);
 void p2m_final_teardown(struct domain *d);
 
 /*
@@ -245,6 +249,8 @@ mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
 /*
  * Direct set a p2m entry: only for use by the P2M code.
  * The P2M write lock should be taken.
+ * TODO: Add a check in __p2m_set_entry() to avoid creating a mapping in
+ * arch_domain_create() that requires p2m_put_l3_page() to be called.
  */
 int p2m_set_entry(struct p2m_domain *p2m,
                   gfn_t sgfn,
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.13


From xen-changelog-bounces@lists.xenproject.org Tue Oct 25 23:00:12 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Oct 2022 23:00:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.430246.681699 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onStx-00084l-Po; Tue, 25 Oct 2022 23:00:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 430246.681699; Tue, 25 Oct 2022 23:00:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onStx-00084R-Mj; Tue, 25 Oct 2022 23:00:05 +0000
Received: by outflank-mailman (input) for mailman id 430246;
 Tue, 25 Oct 2022 23:00:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onStw-0007vf-TR
 for xen-changelog@lists.xenproject.org; Tue, 25 Oct 2022 23:00:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onStw-0005b9-Ro
 for xen-changelog@lists.xenproject.org; Tue, 25 Oct 2022 23:00:04 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onStw-0005vk-Qe
 for xen-changelog@lists.xenproject.org; Tue, 25 Oct 2022 23:00:04 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=kigGJdIsN4SFpIndH787MGK6RoaXMf3CaRQnMph9eVI=; b=jCzEoyxrFSCAwKgK8zajulSo6x
	VT6Z6jHCITVLI00NVeH5A/GCbsCoA01qGHG6HRpv9XNrPiQowTqrQ/FI8rorgdPjGGfPTR+zbVrFJ
	+L0BQXWq5HqYKOQ+8xcf4TGPtLNanDsvCloTYpnF0pYkxtR4gJmL6e6CUqws0OrFCQQU=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] automation: Explicitly enable NULL scheduler for boot-cpupools test
Message-Id: <E1onStw-0005vk-Qe@xenbits.xenproject.org>
Date: Tue, 25 Oct 2022 23:00:04 +0000

commit aef07fd1868455e572b46b3e88e2679414b07214
Author:     Michal Orzel <michal.orzel@amd.com>
AuthorDate: Mon Oct 24 14:04:43 2022 +0200
Commit:     Stefano Stabellini <stefano.stabellini@amd.com>
CommitDate: Tue Oct 25 15:40:46 2022 -0700

    automation: Explicitly enable NULL scheduler for boot-cpupools test
    
    The NULL scheduler is not enabled by default on non-debug Xen builds.
    This causes the boot-time cpupools test to fail on such build jobs. Fix
    the issue by explicitly specifying the config options required to enable
    the NULL scheduler.
    
    Fixes: 36e3f4158778 ("automation: Add a new job for testing boot time cpupools on arm64")
    Signed-off-by: Michal Orzel <michal.orzel@amd.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 automation/gitlab-ci/build.yaml | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/automation/gitlab-ci/build.yaml b/automation/gitlab-ci/build.yaml
index ddc2234faf..716ee0b1e4 100644
--- a/automation/gitlab-ci/build.yaml
+++ b/automation/gitlab-ci/build.yaml
@@ -582,6 +582,9 @@ alpine-3.12-gcc-arm64-boot-cpupools:
   variables:
     CONTAINER: alpine:3.12-arm64v8
     EXTRA_XEN_CONFIG: |
+      CONFIG_EXPERT=y
+      CONFIG_UNSUPPORTED=y
+      CONFIG_SCHED_NULL=y
       CONFIG_BOOT_TIME_CPUPOOLS=y
 
 ## Test artifacts common
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Tue Oct 25 23:00:15 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Oct 2022 23:00:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.430248.681702 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onSu7-0008Op-Sh; Tue, 25 Oct 2022 23:00:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 430248.681702; Tue, 25 Oct 2022 23:00:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onSu7-0008Oh-Pv; Tue, 25 Oct 2022 23:00:15 +0000
Received: by outflank-mailman (input) for mailman id 430248;
 Tue, 25 Oct 2022 23:00:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onSu6-0008OZ-W2
 for xen-changelog@lists.xenproject.org; Tue, 25 Oct 2022 23:00:14 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onSu6-0005bE-Ur
 for xen-changelog@lists.xenproject.org; Tue, 25 Oct 2022 23:00:14 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onSu6-0005wr-Tv
 for xen-changelog@lists.xenproject.org; Tue, 25 Oct 2022 23:00:14 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=4Qe0zUFxGsp/k640T2ylBtN4FqRMURyho3sE5nQvPsc=; b=wUTA27/m5OlF3mgXkmghnPXRmZ
	fA8479GPG6rEAsQ+Op/2uwEy3CAf294cHdoZqyQ3zNsNIwRzSMGAcNj4bWwufx1eNIE4L3O5cr+5s
	9na0r4zVmlUUydinE0DC1SFtnS7fHvF/TSvf85j0u7ksaWWxVBrBD3MERsdE9hwAGZW0=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] automation: Build Xen according to the type of the job
Message-Id: <E1onSu6-0005wr-Tv@xenbits.xenproject.org>
Date: Tue, 25 Oct 2022 23:00:14 +0000

commit ef9cc669ba157f9e71fd79722ee43892e7304604
Author:     Michal Orzel <michal.orzel@amd.com>
AuthorDate: Fri Oct 21 15:22:38 2022 +0200
Commit:     Stefano Stabellini <stefano.stabellini@amd.com>
CommitDate: Tue Oct 25 15:41:30 2022 -0700

    automation: Build Xen according to the type of the job
    
    All the build jobs exist in two flavors: debug and non-debug, where the
    former sets the 'debug' variable to 'y' and the latter to 'n'. This
    variable is only recognized by the toolstack, because Xen requires
    enabling/disabling debug builds via e.g. menuconfig or a config file.
    As a corollary, we end up building/testing Xen with CONFIG_DEBUG always
    set to its default value ('y' for unstable and 'n' for stable branches),
    regardless of the type of the build job.
    
    Fix this behavior by setting CONFIG_DEBUG according to the 'debug' value.
    
    Signed-off-by: Michal Orzel <michal.orzel@amd.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 automation/scripts/build | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/automation/scripts/build b/automation/scripts/build
index 8c0882f3aa..a593419063 100755
--- a/automation/scripts/build
+++ b/automation/scripts/build
@@ -21,12 +21,13 @@ if [[ "${RANDCONFIG}" == "y" ]]; then
     make -j$(nproc) -C xen KCONFIG_ALLCONFIG=tools/kconfig/allrandom.config randconfig
     hypervisor_only="y"
 else
+    echo "CONFIG_DEBUG=${debug}" > xen/.config
+
     if [[ -n "${EXTRA_XEN_CONFIG}" ]]; then
-        echo "${EXTRA_XEN_CONFIG}" > xen/.config
-        make -j$(nproc) -C xen olddefconfig
-    else
-        make -j$(nproc) -C xen defconfig
+        echo "${EXTRA_XEN_CONFIG}" >> xen/.config
     fi
+
+    make -j$(nproc) -C xen olddefconfig
 fi
 
 # Save the config file before building because build failure causes the script
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Wed Oct 26 04:33:07 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Oct 2022 04:33:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.430338.681911 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onY6B-0002PQ-2j; Wed, 26 Oct 2022 04:33:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 430338.681911; Wed, 26 Oct 2022 04:33:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onY6A-0002PE-Vi; Wed, 26 Oct 2022 04:33:02 +0000
Received: by outflank-mailman (input) for mailman id 430338;
 Wed, 26 Oct 2022 04:33:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onY69-0002P8-UP
 for xen-changelog@lists.xenproject.org; Wed, 26 Oct 2022 04:33:02 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onY69-0002Ef-TY
 for xen-changelog@lists.xenproject.org; Wed, 26 Oct 2022 04:33:01 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onY69-0003wn-SO
 for xen-changelog@lists.xenproject.org; Wed, 26 Oct 2022 04:33:01 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=UtS1K6unAp9nq+hnxgi8t3qU9TMfrbIV8Mx1JcI0+iM=; b=iGTG9JOJMSL+xrE5PR2+gEhU5M
	oqIUjbTKAujUuEUXcgB/BwyZeyfY0wZIL1PxKr6eTrR/Imsh4LpJF5mTb9uTZNR+HUidCWVmtjq3d
	1rl4MaK6/DgwVeGgInI9rz9KkYgkttiSb76Ot4xopycJwm95NwTeayQlUuZaWzJN2NZA=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] xen/sched: fix restore_vcpu_affinity() by removing it
Message-Id: <E1onY69-0003wn-SO@xenbits.xenproject.org>
Date: Wed, 26 Oct 2022 04:33:01 +0000

commit fce1f381f7388daaa3e96dbb0d67d7a3e4bb2d2d
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Fri Oct 21 12:50:26 2022 +0200
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Mon Oct 24 11:16:27 2022 +0100

    xen/sched: fix restore_vcpu_affinity() by removing it
    
    When the system is coming up after having been suspended,
    restore_vcpu_affinity() is called for each domain in order to adjust
    the vcpus' affinity settings in case a cpu didn't come back to life
    again.
    
    The way restore_vcpu_affinity() does this is wrong, because the
    specific scheduler isn't informed about a possible migration of the
    vcpu to another cpu. Additionally, the migration often happens even
    when all cpus are running again, as it is done without checking
    whether it is really needed.
    
    As cpupool management is already calling cpu_disable_scheduler() for
    cpus not having come up again, and cpu_disable_scheduler() is taking
    care of eventually needed vcpu migration in the proper way, there is
    simply no need for restore_vcpu_affinity().
    
    So just remove restore_vcpu_affinity() completely, together with the
    no longer used sched_reset_affinity_broken().
    
    Fixes: 8a04eaa8ea83 ("xen/sched: move some per-vcpu items to struct sched_unit")
    Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Dario Faggioli <dfaggioli@suse.com>
    Tested-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/arch/x86/acpi/power.c |  3 --
 xen/common/sched/core.c   | 78 -----------------------------------------------
 xen/include/xen/sched.h   |  1 -
 3 files changed, 82 deletions(-)

diff --git a/xen/arch/x86/acpi/power.c b/xen/arch/x86/acpi/power.c
index 1bb4d78392..b76f673acb 100644
--- a/xen/arch/x86/acpi/power.c
+++ b/xen/arch/x86/acpi/power.c
@@ -159,10 +159,7 @@ static void thaw_domains(void)
 
     rcu_read_lock(&domlist_read_lock);
     for_each_domain ( d )
-    {
-        restore_vcpu_affinity(d);
         domain_unpause(d);
-    }
     rcu_read_unlock(&domlist_read_lock);
 }
 
diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 83455fbde1..23fa6845a8 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -1188,84 +1188,6 @@ static bool sched_check_affinity_broken(const struct sched_unit *unit)
     return false;
 }
 
-static void sched_reset_affinity_broken(const struct sched_unit *unit)
-{
-    struct vcpu *v;
-
-    for_each_sched_unit_vcpu ( unit, v )
-        v->affinity_broken = false;
-}
-
-void restore_vcpu_affinity(struct domain *d)
-{
-    unsigned int cpu = smp_processor_id();
-    struct sched_unit *unit;
-
-    ASSERT(system_state == SYS_STATE_resume);
-
-    rcu_read_lock(&sched_res_rculock);
-
-    for_each_sched_unit ( d, unit )
-    {
-        spinlock_t *lock;
-        unsigned int old_cpu = sched_unit_master(unit);
-        struct sched_resource *res;
-
-        ASSERT(!unit_runnable(unit));
-
-        /*
-         * Re-assign the initial processor as after resume we have no
-         * guarantee the old processor has come back to life again.
-         *
-         * Therefore, here, before actually unpausing the domains, we should
-         * set v->processor of each of their vCPUs to something that will
-         * make sense for the scheduler of the cpupool in which they are in.
-         */
-        lock = unit_schedule_lock_irq(unit);
-
-        cpumask_and(cpumask_scratch_cpu(cpu), unit->cpu_hard_affinity,
-                    cpupool_domain_master_cpumask(d));
-        if ( cpumask_empty(cpumask_scratch_cpu(cpu)) )
-        {
-            if ( sched_check_affinity_broken(unit) )
-            {
-                sched_set_affinity(unit, unit->cpu_hard_affinity_saved, NULL);
-                sched_reset_affinity_broken(unit);
-                cpumask_and(cpumask_scratch_cpu(cpu), unit->cpu_hard_affinity,
-                            cpupool_domain_master_cpumask(d));
-            }
-
-            if ( cpumask_empty(cpumask_scratch_cpu(cpu)) )
-            {
-                /* Affinity settings of one vcpu are for the complete unit. */
-                printk(XENLOG_DEBUG "Breaking affinity for %pv\n",
-                       unit->vcpu_list);
-                sched_set_affinity(unit, &cpumask_all, NULL);
-                cpumask_and(cpumask_scratch_cpu(cpu), unit->cpu_hard_affinity,
-                            cpupool_domain_master_cpumask(d));
-            }
-        }
-
-        res = get_sched_res(cpumask_any(cpumask_scratch_cpu(cpu)));
-        sched_set_res(unit, res);
-
-        spin_unlock_irq(lock);
-
-        /* v->processor might have changed, so reacquire the lock. */
-        lock = unit_schedule_lock_irq(unit);
-        res = sched_pick_resource(unit_scheduler(unit), unit);
-        sched_set_res(unit, res);
-        spin_unlock_irq(lock);
-
-        if ( old_cpu != sched_unit_master(unit) )
-            sched_move_irqs(unit);
-    }
-
-    rcu_read_unlock(&sched_res_rculock);
-
-    domain_update_node_affinity(d);
-}
-
 /*
  * This function is used by cpu_hotplug code via cpu notifier chain
  * and from cpupools to switch schedulers on a cpu.
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 557b3229f6..072e4846aa 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -1019,7 +1019,6 @@ void vcpu_set_periodic_timer(struct vcpu *v, s_time_t value);
 void sched_setup_dom0_vcpus(struct domain *d);
 int vcpu_temporary_affinity(struct vcpu *v, unsigned int cpu, uint8_t reason);
 int vcpu_set_hard_affinity(struct vcpu *v, const cpumask_t *affinity);
-void restore_vcpu_affinity(struct domain *d);
 int vcpu_affinity_domctl(struct domain *d, uint32_t cmd,
                          struct xen_domctl_vcpuaffinity *vcpuaff);
 
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Wed Oct 26 13:00:11 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Oct 2022 13:00:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.430556.682364 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ong0s-0000xH-IS; Wed, 26 Oct 2022 13:00:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 430556.682364; Wed, 26 Oct 2022 13:00:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ong0s-0000x9-El; Wed, 26 Oct 2022 13:00:06 +0000
Received: by outflank-mailman (input) for mailman id 430556;
 Wed, 26 Oct 2022 13:00:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ong0r-0000ov-BW
 for xen-changelog@lists.xenproject.org; Wed, 26 Oct 2022 13:00:05 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ong0r-000375-8G
 for xen-changelog@lists.xenproject.org; Wed, 26 Oct 2022 13:00:05 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ong0r-0003tV-6p
 for xen-changelog@lists.xenproject.org; Wed, 26 Oct 2022 13:00:05 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=Akhdpg4hzFaGCMwkNeXlDiXmeZZyu2L29i2O0tUT+Pg=; b=HsE57dlpf/dRqQIJsAf8QcQ9uD
	KdIehpUYf0E26QFaezt0JWzNULRlY39Kh/sRjgNGiBVBC/Dan8Ga8ykGNb9EZYhvFhSKkG3p8suHo
	9+Qn0FBdwbCO28x96Zuf3r7u4RXNXZB0bsXa9M2DOleOvL/qXHMXKM/izOU+6BCfvEiE=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] vpci: don't assume that vpci per-device data exists unconditionally
Message-Id: <E1ong0r-0003tV-6p@xenbits.xenproject.org>
Date: Wed, 26 Oct 2022 13:00:05 +0000

commit 6ccb5e308ceeb895fbccd87a528a8bd24325aa39
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Wed Oct 26 14:55:30 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Wed Oct 26 14:55:30 2022 +0200

    vpci: don't assume that vpci per-device data exists unconditionally
    
    It's possible for a device to be assigned to a domain but have no
    vpci structure if vpci_process_pending() failed and called
    vpci_remove_device() as a result.  The unconditional accesses done by
    vpci_{read,write}() and vpci_remove_device() to pdev->vpci would
    then trigger a NULL pointer dereference.
    
    Add checks for pdev->vpci presence in the affected functions.
    
    Fixes: 9c244fdef7 ('vpci: add header handlers')
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/drivers/vpci/vpci.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
index 3467c0de86..647f7af679 100644
--- a/xen/drivers/vpci/vpci.c
+++ b/xen/drivers/vpci/vpci.c
@@ -37,7 +37,7 @@ extern vpci_register_init_t *const __end_vpci_array[];
 
 void vpci_remove_device(struct pci_dev *pdev)
 {
-    if ( !has_vpci(pdev->domain) )
+    if ( !has_vpci(pdev->domain) || !pdev->vpci )
         return;
 
     spin_lock(&pdev->vpci->lock);
@@ -326,7 +326,7 @@ uint32_t vpci_read(pci_sbdf_t sbdf, unsigned int reg, unsigned int size)
 
     /* Find the PCI dev matching the address. */
     pdev = pci_get_pdev(d, sbdf);
-    if ( !pdev )
+    if ( !pdev || !pdev->vpci )
         return vpci_read_hw(sbdf, reg, size);
 
     spin_lock(&pdev->vpci->lock);
@@ -436,7 +436,7 @@ void vpci_write(pci_sbdf_t sbdf, unsigned int reg, unsigned int size,
      * Passthrough everything that's not trapped.
      */
     pdev = pci_get_pdev(d, sbdf);
-    if ( !pdev )
+    if ( !pdev || !pdev->vpci )
     {
         vpci_write_hw(sbdf, reg, size, data);
         return;
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Wed Oct 26 13:00:16 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Oct 2022 13:00:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.430557.682368 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ong12-00014A-JP; Wed, 26 Oct 2022 13:00:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 430557.682368; Wed, 26 Oct 2022 13:00:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ong12-00013y-Ga; Wed, 26 Oct 2022 13:00:16 +0000
Received: by outflank-mailman (input) for mailman id 430557;
 Wed, 26 Oct 2022 13:00:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ong11-00013o-D6
 for xen-changelog@lists.xenproject.org; Wed, 26 Oct 2022 13:00:15 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ong11-00037D-CE
 for xen-changelog@lists.xenproject.org; Wed, 26 Oct 2022 13:00:15 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ong11-0003v5-AP
 for xen-changelog@lists.xenproject.org; Wed, 26 Oct 2022 13:00:15 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=ovDpx4oCyaSEYXl++mcmZFTr3+dAyx40x5I01JHSMhA=; b=KItyeoXmjgXFm/Q5ZQ34D5QNhd
	mKr7dUqo9yXviWys+zYxtiKxnKpjK+kHKn80BOoeYgcvqr8WpG1T2OhWM9VAF4iacmWQyewe6+bM7
	8ILQhrzvqB5t2y1wKQbvzqbUjKmqltfe0OuHve0XoIl23YapuDEPq+HucKi73ru+fhMo=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] vpci/msix: remove from table list on detach
Message-Id: <E1ong11-0003v5-AP@xenbits.xenproject.org>
Date: Wed, 26 Oct 2022 13:00:15 +0000

commit c14aea137eab29eb9c30bfad745a00c65ad21066
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Wed Oct 26 14:56:58 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Wed Oct 26 14:56:58 2022 +0200

    vpci/msix: remove from table list on detach
    
    Teardown of MSIX vPCI related data doesn't currently remove the MSIX
    device data from the list of MSIX tables handled by the domain,
    leading to a use-after-free of the data in the msix structure.
    
    Remove the structure from the list before freeing it in order to fix
    the issue.
    
    Reported-by: Jan Beulich <jbeulich@suse.com>
    Fixes: d6281be9d0 ('vpci/msix: add MSI-X handlers')
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/drivers/vpci/vpci.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
index 647f7af679..98198dc2c9 100644
--- a/xen/drivers/vpci/vpci.c
+++ b/xen/drivers/vpci/vpci.c
@@ -51,8 +51,12 @@ void vpci_remove_device(struct pci_dev *pdev)
         xfree(r);
     }
     spin_unlock(&pdev->vpci->lock);
-    if ( pdev->vpci->msix && pdev->vpci->msix->pba )
-        iounmap(pdev->vpci->msix->pba);
+    if ( pdev->vpci->msix )
+    {
+        list_del(&pdev->vpci->msix->next);
+        if ( pdev->vpci->msix->pba )
+            iounmap(pdev->vpci->msix->pba);
+    }
     xfree(pdev->vpci->msix);
     xfree(pdev->vpci->msi);
     xfree(pdev->vpci);
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Wed Oct 26 13:00:26 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Oct 2022 13:00:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.430558.682372 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ong1C-000179-Kv; Wed, 26 Oct 2022 13:00:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 430558.682372; Wed, 26 Oct 2022 13:00:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ong1C-000172-IB; Wed, 26 Oct 2022 13:00:26 +0000
Received: by outflank-mailman (input) for mailman id 430558;
 Wed, 26 Oct 2022 13:00:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ong1B-00016m-GB
 for xen-changelog@lists.xenproject.org; Wed, 26 Oct 2022 13:00:25 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ong1B-00037S-FL
 for xen-changelog@lists.xenproject.org; Wed, 26 Oct 2022 13:00:25 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ong1B-0003vt-EO
 for xen-changelog@lists.xenproject.org; Wed, 26 Oct 2022 13:00:25 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=2xmjsR9KpwCzRF1lDlsOWvQwUBx+BX4MK0KON5AnDnU=; b=oLWp9kgy1flzCLfmzPveG066V/
	Ou8Whjm9ymtFpgxTyqNr7JsMQV/mnz9wFau50EFA/CA96gWEOBoY5oBDKIaRZNSL6aSX0h8CGGaTx
	kItTYKyCMkYZZ8aL3Lkfxmmj06fHBVFa7zRK5IffJrlHDqR6ayD3JEpzAq/rH0Tpzbh8=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] vpci: introduce a local vpci_bar variable to modify_decoding()
Message-Id: <E1ong1B-0003vt-EO@xenbits.xenproject.org>
Date: Wed, 26 Oct 2022 13:00:25 +0000

commit 26bf76b48bbce3e7b126290374c64966dca47561
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Wed Oct 26 14:57:41 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Wed Oct 26 14:57:41 2022 +0200

    vpci: introduce a local vpci_bar variable to modify_decoding()
    
    This is done to shorten line length in the function in preparation for
    adding further usages of the vpci_bar data structure.
    
    No functional change.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/drivers/vpci/header.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c
index a1c928a0d2..eb9219a49a 100644
--- a/xen/drivers/vpci/header.c
+++ b/xen/drivers/vpci/header.c
@@ -103,24 +103,26 @@ static void modify_decoding(const struct pci_dev *pdev, uint16_t cmd,
 
     for ( i = 0; i < ARRAY_SIZE(header->bars); i++ )
     {
-        if ( !MAPPABLE_BAR(&header->bars[i]) )
+        struct vpci_bar *bar = &header->bars[i];
+
+        if ( !MAPPABLE_BAR(bar) )
             continue;
 
-        if ( rom_only && header->bars[i].type == VPCI_BAR_ROM )
+        if ( rom_only && bar->type == VPCI_BAR_ROM )
         {
             unsigned int rom_pos = (i == PCI_HEADER_NORMAL_NR_BARS)
                                    ? PCI_ROM_ADDRESS : PCI_ROM_ADDRESS1;
-            uint32_t val = header->bars[i].addr |
+            uint32_t val = bar->addr |
                            (map ? PCI_ROM_ADDRESS_ENABLE : 0);
 
-            header->bars[i].enabled = header->rom_enabled = map;
+            bar->enabled = header->rom_enabled = map;
             pci_conf_write32(pdev->sbdf, rom_pos, val);
             return;
         }
 
         if ( !rom_only &&
-             (header->bars[i].type != VPCI_BAR_ROM || header->rom_enabled) )
-            header->bars[i].enabled = map;
+             (bar->type != VPCI_BAR_ROM || header->rom_enabled) )
+            bar->enabled = map;
     }
 
     if ( !rom_only )
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Wed Oct 26 19:22:10 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Oct 2022 19:22:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.430751.682781 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onlyX-00023x-Q9; Wed, 26 Oct 2022 19:22:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 430751.682781; Wed, 26 Oct 2022 19:22:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onlyX-00023p-NU; Wed, 26 Oct 2022 19:22:05 +0000
Received: by outflank-mailman (input) for mailman id 430751;
 Wed, 26 Oct 2022 19:22:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onlyW-00023j-Sf
 for xen-changelog@lists.xenproject.org; Wed, 26 Oct 2022 19:22:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onlyW-0001hu-ON
 for xen-changelog@lists.xenproject.org; Wed, 26 Oct 2022 19:22:04 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onlyW-0000kt-Mc
 for xen-changelog@lists.xenproject.org; Wed, 26 Oct 2022 19:22:04 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=W0DCQHjuglPzBqPZB0eo0ZaN81NqyInBDA6treLLBK0=; b=nQnVbfvQ+3b2mhUF9P8e4pMZvu
	Xz0q984eOTX8ET51BVL1Ap2o3Vwsi58EoSDEtSUW12t1afrIHJ2CqZEJe1MDWt7C0ghVTg1ENz76d
	Po9OAwBDh4vfLmLTmga4IyqiONwPgwZp1o/xFewZg5BBP5bm8AALC+vbAXqcVtBZo9mU=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] CI: Drop more TravisCI remnants
Message-Id: <E1onlyW-0000kt-Mc@xenbits.xenproject.org>
Date: Wed, 26 Oct 2022 19:22:04 +0000

commit bad4832710c7261fad1abe2d0e8e2e1d259b3e8d
Author:     Andrew Cooper <andrew.cooper3@citrix.com>
AuthorDate: Wed Oct 26 13:39:06 2022 +0100
Commit:     Stefano Stabellini <stefano.stabellini@amd.com>
CommitDate: Wed Oct 26 12:18:54 2022 -0700

    CI: Drop more TravisCI remnants
    
    This was missed from previous attempts to remove Travis.
    
    Fixes: e0dc9b095e7c ("CI: Drop TravisCI")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 MAINTAINERS          |  1 -
 scripts/travis-build | 32 --------------------------------
 2 files changed, 33 deletions(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index 816656950a..175f10f33f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -274,7 +274,6 @@ W:	https://gitlab.com/xen-project/xen
 S:	Supported
 F:	.gitlab-ci.yml
 F:	automation/
-F:	scripts/travis-build
 
 CPU POOLS
 M:	Juergen Gross <jgross@suse.com>
diff --git a/scripts/travis-build b/scripts/travis-build
deleted file mode 100755
index 84d74266a0..0000000000
--- a/scripts/travis-build
+++ /dev/null
@@ -1,32 +0,0 @@
-#!/bin/bash -ex
-
-$CC --version
-
-# random config or default config
-if [[ "${RANDCONFIG}" == "y" ]]; then
-    make -C xen KCONFIG_ALLCONFIG=tools/kconfig/allrandom.config randconfig
-else
-    make -C xen defconfig
-fi
-
-# build up our configure options
-cfgargs=()
-cfgargs+=("--disable-stubdom") # more work needed into building this
-cfgargs+=("--disable-rombios")
-cfgargs+=("--enable-docs")
-cfgargs+=("--with-system-seabios=/usr/share/seabios/bios.bin")
-
-# Qemu requires Python 3.5 or later
-if ! type python3 || python3 -c "import sys; res = sys.version_info < (3, 5); exit(not(res))"; then
-    cfgargs+=("--with-system-qemu=/bin/false")
-fi
-
-if [[ "${XEN_TARGET_ARCH}" == "x86_64" ]]; then
-    cfgargs+=("--enable-tools")
-else
-    cfgargs+=("--disable-tools") # we don't have the cross depends installed
-fi
-
-./configure "${cfgargs[@]}"
-
-make dist
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 03:33:13 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 03:33:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.430837.682977 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ontdg-0001ZQ-GJ; Thu, 27 Oct 2022 03:33:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 430837.682977; Thu, 27 Oct 2022 03:33:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ontdg-0001ZH-Cr; Thu, 27 Oct 2022 03:33:04 +0000
Received: by outflank-mailman (input) for mailman id 430837;
 Thu, 27 Oct 2022 03:33:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ontdf-0001ZB-5d
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:33:03 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ontde-0000hQ-EB
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:33:02 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ontde-0004mI-BU
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:33:02 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=ppkHAfwUtSqUQ7EdD+0IT0e0Zbi0+HgXZao0t+QbE6E=; b=l2bEKFnr2h6MQZ5pPu1r7lwwEL
	aFMs0ZwyGndOx6scRrhKXe8D1LNHioyZU4px5hBR0g5fK21W8+aciG7BxlWvcao7b/Eiwy1E+kwBD
	mSwYBcYE6x2xm+3nUSoqIFRdqdXG/0lt1dG6qRp07UhAVg3sf0ZpBpNxYLvaWVv21H+s=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.15] xen/arm: p2m: Prevent adding mapping when domain is dying
Message-Id: <E1ontde-0004mI-BU@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 03:33:02 +0000

commit 09fc590c15773c2471946a78740c6b02e8c34a45
Author:     Julien Grall <jgrall@amazon.com>
AuthorDate: Tue Oct 11 15:05:53 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:05:53 2022 +0200

    xen/arm: p2m: Prevent adding mapping when domain is dying
    
    During the domain destroy process, the domain will still be accessible
    until it is fully destroyed. The same applies to the P2M, because we
    don't bail out early when is_dying is non-zero. If a domain has
    permission to modify another domain's P2M (e.g. dom0, or a stubdomain),
    then foreign mappings can be added past relinquish_p2m_mapping().
    
    Therefore, we need to prevent mappings from being added while the
    domain is dying. This commit does so by adding a d->is_dying check
    to p2m_set_entry(). It also tightens the check in
    relinquish_p2m_mapping() to make sure that no mappings can be added
    to the P2M after the P2M lock is released.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Tested-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    master commit: 3ebe773293e3b945460a3d6f54f3b91915397bab
    master date: 2022-10-11 14:20:18 +0200
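The guard described above can be sketched in isolation. This is a minimal, hypothetical model (struct names and the p2m_set_entry_sketch() helper are illustrative stand-ins, not the real Xen code): once the owning domain is marked dying, any attempt to add a mapping is refused, mirroring the -ENOMEM return the patch adds.

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical, simplified stand-ins for the Xen structures involved. */
struct domain { int is_dying; };
struct p2m_domain { struct domain *domain; unsigned int entries; };

/*
 * Sketch of the guard this patch adds to p2m_set_entry(): refuse to
 * create new mappings once the owning domain is dying, so nothing can
 * be added past relinquish_p2m_mapping(). The real code returns
 * -ENOMEM for this case.
 */
static int p2m_set_entry_sketch(struct p2m_domain *p2m)
{
    if (p2m->domain->is_dying)
        return -ENOMEM;

    p2m->entries++;  /* stands in for the actual mapping work */
    return 0;
}
```

The actual patch places the check before the mapping loop, under the P2M write lock, which is what makes the relinquish-time BUG_ON sound.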
---
 xen/arch/arm/p2m.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 2ddd06801a..8398251c51 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1093,6 +1093,15 @@ int p2m_set_entry(struct p2m_domain *p2m,
 {
     int rc = 0;
 
+    /*
+     * Any reference taken by the P2M mappings (e.g. foreign mapping) will
+     * be dropped in relinquish_p2m_mapping(). As the P2M will still
+     * be accessible after, we need to prevent mapping to be added when the
+     * domain is dying.
+     */
+    if ( unlikely(p2m->domain->is_dying) )
+        return -ENOMEM;
+
     while ( nr )
     {
         unsigned long mask;
@@ -1613,6 +1622,8 @@ int relinquish_p2m_mapping(struct domain *d)
     unsigned int order;
     gfn_t start, end;
 
+    BUG_ON(!d->is_dying);
+    /* No mappings can be added in the P2M after the P2M lock is released. */
     p2m_write_lock(p2m);
 
     start = p2m->lowest_mapped_gfn;
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.15


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 03:33:13 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 03:33:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.430838.682981 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ontdp-0001ar-HN; Thu, 27 Oct 2022 03:33:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 430838.682981; Thu, 27 Oct 2022 03:33:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ontdp-0001ai-EY; Thu, 27 Oct 2022 03:33:13 +0000
Received: by outflank-mailman (input) for mailman id 430838;
 Thu, 27 Oct 2022 03:33:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ontdo-0001ac-IB
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:33:12 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ontdo-0000hk-HO
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:33:12 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ontdo-0004ml-Ga
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:33:12 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=KfMQE4K86kbcP+343rV3O+C0bxirXSwvaMk8Qk0y8Ro=; b=OkrrvQIFjOlR/EeEoxZbA/S++O
	mFw8115z/WJQgxtPzUR/BNHn00U0R+JNd6dv+v0ddGE8vhrRWJnlETiC5BUaCVqspyl86SLAQ3ZdJ
	LdxeWXzXqZXA1/I9ErKA5aXuDBSDMMnTaykPtQGYDr3/eLG1lyTS2SnowugN5OFEaRt4=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.15] xen/arm: p2m: Handle preemption when freeing intermediate page tables
Message-Id: <E1ontdo-0004ml-Ga@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 03:33:12 +0000

commit 0d805f9fba4bc155d15047685024f7d842e925e4
Author:     Julien Grall <jgrall@amazon.com>
AuthorDate: Tue Oct 11 15:06:36 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:06:36 2022 +0200

    xen/arm: p2m: Handle preemption when freeing intermediate page tables
    
    At the moment the P2M page tables are freed, without any preemption,
    when the domain structure is freed. As the P2M is quite large,
    iterating through it may take more time than is reasonable without
    intermediate preemption (to run softirqs and perhaps the scheduler).
    
    Split p2m_teardown() in two parts: one preemptible and called when
    relinquishing the resources, the other one non-preemptible and called
    when freeing the domain structure.
    
    As we are now freeing the P2M pages early, we also need to prevent
    further allocations if someone calls p2m_set_entry() past
    p2m_teardown() (I wasn't able to prove this will never happen). This
    is done by the domain->is_dying check added to p2m_set_entry() by
    the previous patch.
    
    Similarly, we want to make sure that no one can access the freed
    pages. Therefore the root is cleared before the pages are freed.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Tested-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    master commit: 3202084566bba0ef0c45caf8c24302f83d92f9c8
    master date: 2022-10-11 14:20:56 +0200
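The preemptible-teardown pattern the patch introduces can be shown with a small, self-contained sketch (the list type, the preempt_requested flag, and free_list_preemptible() are hypothetical stand-ins for Xen's page_list, hypercall_preempt_check(), and p2m_teardown()): drain a list in batches, and bail out with -ERESTART whenever preemption is pending so the caller retries from where it left off.

```c
#include <assert.h>
#include <stddef.h>

#ifndef ERESTART
#define ERESTART 85   /* errno-style "restart the operation" value */
#endif

/* Hypothetical minimal page list, standing in for Xen's page_list. */
typedef struct page { struct page *next; } page_t;

static int preempt_requested;  /* models hypercall_preempt_check() */

/*
 * Sketch of the preemptible half of p2m_teardown(): remove and free
 * pages from the list, but check for pending preemption every 512
 * iterations and return -ERESTART so the caller keeps retrying until
 * the list is fully drained.
 */
static int free_list_preemptible(page_t **head)
{
    unsigned long count = 0;
    page_t *pg;

    while ((pg = *head) != NULL) {
        *head = pg->next;          /* page_list_remove_head() */
        /* free_domheap_page(pg) would go here */
        if (!(++count % 512) && preempt_requested)
            return -ERESTART;
    }
    return 0;
}
```

In the real patch this loop runs under the P2M write lock, and the -ERESTART propagates out of domain_relinquish_resources() so the hypercall is restarted.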
---
 xen/arch/arm/domain.c     | 10 ++++++++--
 xen/arch/arm/p2m.c        | 47 ++++++++++++++++++++++++++++++++++++++++++++---
 xen/include/asm-arm/p2m.h | 13 +++++++++++--
 3 files changed, 63 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 5eaf4c718e..223ec9694d 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -779,10 +779,10 @@ fail:
 void arch_domain_destroy(struct domain *d)
 {
     /* IOMMU page table is shared with P2M, always call
-     * iommu_domain_destroy() before p2m_teardown().
+     * iommu_domain_destroy() before p2m_final_teardown().
      */
     iommu_domain_destroy(d);
-    p2m_teardown(d);
+    p2m_final_teardown(d);
     domain_vgic_free(d);
     domain_vuart_free(d);
     free_xenheap_page(d->shared_info);
@@ -984,6 +984,7 @@ enum {
     PROG_xen,
     PROG_page,
     PROG_mapping,
+    PROG_p2m,
     PROG_done,
 };
 
@@ -1038,6 +1039,11 @@ int domain_relinquish_resources(struct domain *d)
         if ( ret )
             return ret;
 
+    PROGRESS(p2m):
+        ret = p2m_teardown(d);
+        if ( ret )
+            return ret;
+
     PROGRESS(done):
         break;
 
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 8398251c51..4ad3e0606e 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1530,17 +1530,58 @@ static void p2m_free_vmid(struct domain *d)
     spin_unlock(&vmid_alloc_lock);
 }
 
-void p2m_teardown(struct domain *d)
+int p2m_teardown(struct domain *d)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    unsigned long count = 0;
     struct page_info *pg;
+    unsigned int i;
+    int rc = 0;
+
+    p2m_write_lock(p2m);
+
+    /*
+     * We are about to free the intermediate page-tables, so clear the
+     * root to prevent any walk to use them.
+     */
+    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
+        clear_and_clean_page(p2m->root + i);
+
+    /*
+     * The domain will not be scheduled anymore, so in theory we should
+     * not need to flush the TLBs. Do it for safety purpose.
+     *
+     * Note that all the devices have already been de-assigned. So we don't
+     * need to flush the IOMMU TLB here.
+     */
+    p2m_force_tlb_flush_sync(p2m);
+
+    while ( (pg = page_list_remove_head(&p2m->pages)) )
+    {
+        free_domheap_page(pg);
+        count++;
+        /* Arbitrarily preempt every 512 iterations */
+        if ( !(count % 512) && hypercall_preempt_check() )
+        {
+            rc = -ERESTART;
+            break;
+        }
+    }
+
+    p2m_write_unlock(p2m);
+
+    return rc;
+}
+
+void p2m_final_teardown(struct domain *d)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
     /* p2m not actually initialized */
     if ( !p2m->domain )
         return;
 
-    while ( (pg = page_list_remove_head(&p2m->pages)) )
-        free_domheap_page(pg);
+    ASSERT(page_list_empty(&p2m->pages));
 
     if ( p2m->root )
         free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 6a2108398f..3a2d51b35d 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -192,8 +192,17 @@ void setup_virt_paging(void);
 /* Init the datastructures for later use by the p2m code */
 int p2m_init(struct domain *d);
 
-/* Return all the p2m resources to Xen. */
-void p2m_teardown(struct domain *d);
+/*
+ * The P2M resources are freed in two parts:
+ *  - p2m_teardown() will be called when relinquish the resources. It
+ *    will free large resources (e.g. intermediate page-tables) that
+ *    requires preemption.
+ *  - p2m_final_teardown() will be called when domain struct is been
+ *    freed. This *cannot* be preempted and therefore one small
+ *    resources should be freed here.
+ */
+int p2m_teardown(struct domain *d);
+void p2m_final_teardown(struct domain *d);
 
 /*
  * Remove mapping refcount on each mapping page in the p2m
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.15


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 03:33:23 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 03:33:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.430839.682984 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ontdz-0001dD-Iw; Thu, 27 Oct 2022 03:33:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 430839.682984; Thu, 27 Oct 2022 03:33:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ontdz-0001d6-GF; Thu, 27 Oct 2022 03:33:23 +0000
Received: by outflank-mailman (input) for mailman id 430839;
 Thu, 27 Oct 2022 03:33:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ontdy-0001cu-LX
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:33:22 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ontdy-0000i3-Kp
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:33:22 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ontdy-0004nG-Js
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:33:22 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=oMIMcB0mPdPJ58mJDUeReLmJy75u3JgLtL4aWrApj/k=; b=Qo+UEWe6ExIlgS0IDnO48BK1O4
	QgdwabcIc4THBVcnc8n/LkvSQVQAIrj3yKujLCRLxoRQ9MYUZ47KjyYQTRB6HMhc2r6RP278Larf2
	nHBcMqlgFK4BvuE+F46nWWzlgN5+ldPAYjjSo99ue5E3t+7p+vqSdwhoN+wu3H/g+hzQ=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.15] x86/p2m: add option to skip root pagetable removal in p2m_teardown()
Message-Id: <E1ontdy-0004nG-Js@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 03:33:22 +0000

commit 0f3eab90f327210d91e8e31a769376f286e8819a
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 15:07:25 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:07:25 2022 +0200

    x86/p2m: add option to skip root pagetable removal in p2m_teardown()
    
    Add a new parameter to p2m_teardown() in order to select whether the
    root page table should also be freed.  Note that all users are
    adjusted to pass the parameter to remove the root page tables, so
    behavior is not modified.
    
    No functional change intended.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Suggested-by: Julien Grall <julien@xen.org>
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: 1df52a270225527ae27bfa2fc40347bf93b78357
    master date: 2022-10-11 14:21:23 +0200
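The remove_root distinction can be sketched with a toy list model (the page_t type, is_root flag, and teardown() helper below are hypothetical, not the real Xen code): when remove_root is false, the root page is pulled aside during the free loop and re-added to the list afterwards, matching the root_pg handling in the diff.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical minimal model of the p2m page list and root page. */
typedef struct page { struct page *next; int is_root; } page_t;

/*
 * Sketch of the new p2m_teardown(p2m, remove_root) behaviour: free
 * every page on the list, but when remove_root is false, keep the
 * (cleared) root page and re-add it to the list instead of freeing it.
 * Returns the number of pages "freed".
 */
static unsigned int teardown(page_t **pages, int remove_root)
{
    page_t *root_pg = NULL;
    unsigned int freed = 0;
    page_t *pg;

    while ((pg = *pages) != NULL) {
        *pages = pg->next;
        if (!remove_root && pg->is_root) {
            root_pg = pg;        /* would be cleared, not freed */
            continue;
        }
        freed++;                 /* free_page(pg) would go here */
    }

    if (root_pg) {
        root_pg->next = *pages;  /* page_list_add(root_pg, pages) */
        *pages = root_pg;
    }
    return freed;
}
```

Since every caller in this patch still passes remove_root as true, the sketch's "keep the root" path corresponds to behaviour that only later patches in the series start to use.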
---
 xen/arch/x86/mm/hap/hap.c       |  6 +++---
 xen/arch/x86/mm/p2m.c           | 20 ++++++++++++++++----
 xen/arch/x86/mm/shadow/common.c |  4 ++--
 xen/include/asm-x86/p2m.h       |  2 +-
 4 files changed, 22 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 47a7487fa7..a8f5a19da9 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -541,18 +541,18 @@ void hap_final_teardown(struct domain *d)
         }
 
         for ( i = 0; i < MAX_ALTP2M; i++ )
-            p2m_teardown(d->arch.altp2m_p2m[i]);
+            p2m_teardown(d->arch.altp2m_p2m[i], true);
     }
 
     /* Destroy nestedp2m's first */
     for (i = 0; i < MAX_NESTEDP2M; i++) {
-        p2m_teardown(d->arch.nested_p2m[i]);
+        p2m_teardown(d->arch.nested_p2m[i], true);
     }
 
     if ( d->arch.paging.hap.total_pages != 0 )
         hap_teardown(d, NULL);
 
-    p2m_teardown(p2m_get_hostp2m(d));
+    p2m_teardown(p2m_get_hostp2m(d), true);
     /* Free any memory that the p2m teardown released */
     paging_lock(d);
     hap_set_allocation(d, 0, NULL);
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 85681dee26..8ba73082c1 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -741,11 +741,11 @@ int p2m_alloc_table(struct p2m_domain *p2m)
  * hvm fixme: when adding support for pvh non-hardware domains, this path must
  * cleanup any foreign p2m types (release refcnts on them).
  */
-void p2m_teardown(struct p2m_domain *p2m)
+void p2m_teardown(struct p2m_domain *p2m, bool remove_root)
 /* Return all the p2m pages to Xen.
  * We know we don't have any extra mappings to these pages */
 {
-    struct page_info *pg;
+    struct page_info *pg, *root_pg = NULL;
     struct domain *d;
 
     if (p2m == NULL)
@@ -755,10 +755,22 @@ void p2m_teardown(struct p2m_domain *p2m)
 
     p2m_lock(p2m);
     ASSERT(atomic_read(&d->shr_pages) == 0);
-    p2m->phys_table = pagetable_null();
+
+    if ( remove_root )
+        p2m->phys_table = pagetable_null();
+    else if ( !pagetable_is_null(p2m->phys_table) )
+    {
+        root_pg = pagetable_get_page(p2m->phys_table);
+        clear_domain_page(pagetable_get_mfn(p2m->phys_table));
+    }
 
     while ( (pg = page_list_remove_head(&p2m->pages)) )
-        d->arch.paging.free_page(d, pg);
+        if ( pg != root_pg )
+            d->arch.paging.free_page(d, pg);
+
+    if ( root_pg )
+        page_list_add(root_pg, &p2m->pages);
+
     p2m_unlock(p2m);
 }
 
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 4a8882430b..abe6d43343 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2768,7 +2768,7 @@ int shadow_enable(struct domain *d, u32 mode)
     paging_unlock(d);
  out_unlocked:
     if ( rv != 0 && !pagetable_is_null(p2m_get_pagetable(p2m)) )
-        p2m_teardown(p2m);
+        p2m_teardown(p2m, true);
     if ( rv != 0 && pg != NULL )
     {
         pg->count_info &= ~PGC_count_mask;
@@ -2933,7 +2933,7 @@ void shadow_final_teardown(struct domain *d)
         shadow_teardown(d, NULL);
 
     /* It is now safe to pull down the p2m map. */
-    p2m_teardown(p2m_get_hostp2m(d));
+    p2m_teardown(p2m_get_hostp2m(d), true);
     /* Free any shadow memory that the p2m teardown released */
     paging_lock(d);
     shadow_set_allocation(d, 0, NULL);
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 46e8b94a49..46eb51d44c 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -619,7 +619,7 @@ int p2m_init(struct domain *d);
 int p2m_alloc_table(struct p2m_domain *p2m);
 
 /* Return all the p2m resources to Xen. */
-void p2m_teardown(struct p2m_domain *p2m);
+void p2m_teardown(struct p2m_domain *p2m, bool remove_root);
 void p2m_final_teardown(struct domain *d);
 
 /* Add a page to a domain's p2m table */
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.15


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 03:33:33 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 03:33:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.430840.682988 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onte9-0001gf-Mq; Thu, 27 Oct 2022 03:33:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 430840.682988; Thu, 27 Oct 2022 03:33:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onte9-0001gX-JY; Thu, 27 Oct 2022 03:33:33 +0000
Received: by outflank-mailman (input) for mailman id 430840;
 Thu, 27 Oct 2022 03:33:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onte8-0001gI-P2
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:33:32 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onte8-0000iE-OF
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:33:32 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onte8-0004nj-N7
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:33:32 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=7WwU/uHICnsxSGloStr7UAFMpRbFJo6tAwljkt/lA58=; b=rMEagfzp2vu0S/tAb0KzVqZiQM
	Hha/HTf82a4baxdiq6mTY8JVFPV3H/6wW27L89tTDUPtJe6RwTb+vFCtWZfn2mJ3+3km4MMmgtxag
	+oPM8k89af40E+UvPJDeOVoSue6QgABjeDN1FD35gJABQe6XNT7+hwm8S5c8L82ahbbU=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.15] x86/HAP: adjust monitor table related error handling
Message-Id: <E1onte8-0004nj-N7@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 03:33:32 +0000

commit d24a10a91d46a56e1d406239643ec651a31033d4
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 15:07:42 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:07:42 2022 +0200

    x86/HAP: adjust monitor table related error handling
    
    hap_make_monitor_table() will return INVALID_MFN if it encounters an
    error condition, but hap_update_paging_modes() wasn't handling this
    value, resulting in an inappropriate value being stored in
    monitor_table. This would subsequently mislead at least
    hap_vcpu_teardown(). Avoid this by bailing early.
    
    Further, when a domain has already been crashed or (perhaps less
    importantly, as there's no such path known to lead here) is already
    dying, avoid calling domain_crash() on it again - that's at best
    confusing.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    master commit: 5b44a61180f4f2e4f490a28400c884dd357ff45d
    master date: 2022-10-11 14:21:56 +0200
---
 xen/arch/x86/mm/hap/hap.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index a8f5a19da9..d75dc2b9ed 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -39,6 +39,7 @@
 #include <asm/domain.h>
 #include <xen/numa.h>
 #include <asm/hvm/nestedhvm.h>
+#include <public/sched.h>
 
 #include "private.h"
 
@@ -405,8 +406,13 @@ static mfn_t hap_make_monitor_table(struct vcpu *v)
     return m4mfn;
 
  oom:
-    printk(XENLOG_G_ERR "out of memory building monitor pagetable\n");
-    domain_crash(d);
+    if ( !d->is_dying &&
+         (!d->is_shutting_down || d->shutdown_code != SHUTDOWN_crash) )
+    {
+        printk(XENLOG_G_ERR "%pd: out of memory building monitor pagetable\n",
+               d);
+        domain_crash(d);
+    }
     return INVALID_MFN;
 }
 
@@ -766,6 +772,9 @@ static void hap_update_paging_modes(struct vcpu *v)
     if ( pagetable_is_null(v->arch.hvm.monitor_table) )
     {
         mfn_t mmfn = hap_make_monitor_table(v);
+
+        if ( mfn_eq(mmfn, INVALID_MFN) )
+            goto unlock;
         v->arch.hvm.monitor_table = pagetable_from_mfn(mmfn);
         make_cr3(v, mmfn);
         hvm_update_host_cr3(v);
@@ -774,6 +783,7 @@ static void hap_update_paging_modes(struct vcpu *v)
     /* CR3 is effectively updated by a mode change. Flush ASIDs, etc. */
     hap_update_cr3(v, 0, false);
 
+ unlock:
     paging_unlock(d);
     put_gfn(d, cr3_gfn);
 }
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.15


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 03:33:43 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 03:33:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.430841.682994 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onteJ-0001jz-Oa; Thu, 27 Oct 2022 03:33:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 430841.682994; Thu, 27 Oct 2022 03:33:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onteJ-0001jn-LQ; Thu, 27 Oct 2022 03:33:43 +0000
Received: by outflank-mailman (input) for mailman id 430841;
 Thu, 27 Oct 2022 03:33:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onteI-0001jf-T8
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:33:42 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onteI-0000iW-SP
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:33:42 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onteI-0004o9-Qx
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:33:42 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=MKrm66ViTdOlwhDZYuRcePWvRgeMOoqtxE9RUyabYFQ=; b=r8uRhx25AuC2+8z3xoL2dVMYyz
	USsCLqbcaMWak2p1/UXaco1ZxXwrlgjct3Myx/REZBMFjtRhhUo4ErhZG9l6qDlipio7rz9ID0/7R
	E1zaaNYstKO5nb9WP+jPyrRePkROm+P6R4FWC6ZBo2xaobIukJRFeBf48tivauhDoqfY=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.15] x86/shadow: tolerate failure of sh_set_toplevel_shadow()
Message-Id: <E1onteI-0004o9-Qx@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 03:33:42 +0000

commit 95f6d555ec84383f7daaf3374f65bec5ff4351f5
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 15:07:57 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:07:57 2022 +0200

    x86/shadow: tolerate failure of sh_set_toplevel_shadow()
    
    Subsequently sh_set_toplevel_shadow() will be adjusted to install a
    blank entry in case prealloc fails. There are, in fact, pre-existing
    error paths which would put in place a blank entry. The 4- and 2-level
    code in sh_update_cr3(), however, assumes the top-level entry to be
    valid.
    
    Hence bail from the function in the unlikely event that it's not. Note
    that 3-level logic works differently: In particular a guest is free to
    supply a PDPTR pointing at 4 non-present (or otherwise deemed invalid)
    entries. The guest will crash, but we already cope with that.
    
    Really mfn_valid() is likely wrong to use in sh_set_toplevel_shadow(),
    and it should instead be !mfn_eq(gmfn, INVALID_MFN). Avoid such a change
    in security context, but add a respective assertion.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: eac000978c1feb5a9ee3236ab0c0da9a477e5336
    master date: 2022-10-11 14:22:24 +0200
---
 xen/arch/x86/mm/shadow/common.c |  1 +
 xen/arch/x86/mm/shadow/multi.c  | 10 ++++++++++
 2 files changed, 11 insertions(+)

diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index abe6d43343..0ab2ac6b7a 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2583,6 +2583,7 @@ void sh_set_toplevel_shadow(struct vcpu *v,
     /* Now figure out the new contents: is this a valid guest MFN? */
     if ( !mfn_valid(gmfn) )
     {
+        ASSERT(mfn_eq(gmfn, INVALID_MFN));
         new_entry = pagetable_null();
         goto install_new_entry;
     }
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 9b43cb116c..7e0494cf7f 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -3697,6 +3697,11 @@ sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
     if ( sh_remove_write_access(d, gmfn, 4, 0) != 0 )
         guest_flush_tlb_mask(d, d->dirty_cpumask);
     sh_set_toplevel_shadow(v, 0, gmfn, SH_type_l4_shadow, sh_make_shadow);
+    if ( unlikely(pagetable_is_null(v->arch.paging.shadow.shadow_table[0])) )
+    {
+        ASSERT(d->is_dying || d->is_shutting_down);
+        return;
+    }
     if ( !shadow_mode_external(d) && !is_pv_32bit_domain(d) )
     {
         mfn_t smfn = pagetable_get_mfn(v->arch.paging.shadow.shadow_table[0]);
@@ -3757,6 +3762,11 @@ sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
     if ( sh_remove_write_access(d, gmfn, 2, 0) != 0 )
         guest_flush_tlb_mask(d, d->dirty_cpumask);
     sh_set_toplevel_shadow(v, 0, gmfn, SH_type_l2_shadow, sh_make_shadow);
+    if ( unlikely(pagetable_is_null(v->arch.paging.shadow.shadow_table[0])) )
+    {
+        ASSERT(d->is_dying || d->is_shutting_down);
+        return;
+    }
 #else
 #error This should never happen
 #endif
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.15


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 03:33:53 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 03:33:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.430842.682999 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onteT-0001mk-RQ; Thu, 27 Oct 2022 03:33:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 430842.682999; Thu, 27 Oct 2022 03:33:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onteT-0001mc-N3; Thu, 27 Oct 2022 03:33:53 +0000
Received: by outflank-mailman (input) for mailman id 430842;
 Thu, 27 Oct 2022 03:33:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onteT-0001mS-0T
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:33:53 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onteS-0000ig-Vv
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:33:52 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onteS-0004q5-V4
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:33:52 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=63XsY5GVeQ2Fm/ssGpm+pARUR8yQh1DXZTuv/NpcF5o=; b=gu/C0l3u5j6fNJcC/pbqGjtWdL
	pB7wt0PzD6ZbJYqHELS1qfMIK2b6PHo7R+gF97ornr7HzmEER7MaR/crL9X3ux+zuc+WW1+4zI2Tb
	3AH5GwPkO7EXSwIDqjinhBwxRVXPYjcAE9T6v8TV+EWO/7f5WA7fxPKpICE+9dyOYH88=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.15] x86/shadow: tolerate failure in shadow_prealloc()
Message-Id: <E1onteS-0004q5-V4@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 03:33:52 +0000

commit 1e26afa846fb9a00b9155280eeae3b8cb8375dd6
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 15:08:14 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:08:14 2022 +0200

    x86/shadow: tolerate failure in shadow_prealloc()
    
    Prevent _shadow_prealloc() from calling BUG() when unable to fulfill
    the pre-allocation and instead return true/false.  Modify
    shadow_prealloc() to crash the domain on allocation failure (if the
    domain is not already dying), as shadow cannot operate normally after
    that.  Modify callers to also gracefully handle {_,}shadow_prealloc()
    failing to fulfill the request.
    
    Note this in turn requires adjusting the callers of
    sh_make_monitor_table() also to handle it returning INVALID_MFN.
    sh_update_paging_modes() is also modified to add additional error
    paths in case of allocation failure; some of those will return with
    null monitor page tables (and the domain likely crashed).  This is no
    different than the current error paths, but the newly introduced ones
    are more likely to trigger.

    The now added failure points in sh_update_paging_modes() also require
    that on some error return paths the previous structures are cleared,
    and thus the monitor table is null.
    
    While there adjust the 'type' parameter type of shadow_prealloc() to
    unsigned int rather than u32.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: b7f93c6afb12b6061e2d19de2f39ea09b569ac68
    master date: 2022-10-11 14:22:53 +0200
---
 xen/arch/x86/mm/shadow/common.c  | 69 ++++++++++++++++++++++++++++++----------
 xen/arch/x86/mm/shadow/hvm.c     |  4 ++-
 xen/arch/x86/mm/shadow/multi.c   | 11 +++++--
 xen/arch/x86/mm/shadow/private.h |  3 +-
 4 files changed, 66 insertions(+), 21 deletions(-)

diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 0ab2ac6b7a..fc4f7f78ce 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -36,6 +36,7 @@
 #include <asm/flushtlb.h>
 #include <asm/shadow.h>
 #include <xen/numa.h>
+#include <public/sched.h>
 #include "private.h"
 
 DEFINE_PER_CPU(uint32_t,trace_shadow_path_flags);
@@ -927,14 +928,15 @@ static inline void trace_shadow_prealloc_unpin(struct domain *d, mfn_t smfn)
 
 /* Make sure there are at least count order-sized pages
  * available in the shadow page pool. */
-static void _shadow_prealloc(struct domain *d, unsigned int pages)
+static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
 {
     struct vcpu *v;
     struct page_info *sp, *t;
     mfn_t smfn;
     int i;
 
-    if ( d->arch.paging.shadow.free_pages >= pages ) return;
+    if ( d->arch.paging.shadow.free_pages >= pages )
+        return true;
 
     /* Shouldn't have enabled shadows if we've no vcpus. */
     ASSERT(d->vcpu && d->vcpu[0]);
@@ -950,7 +952,8 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
         sh_unpin(d, smfn);
 
         /* See if that freed up enough space */
-        if ( d->arch.paging.shadow.free_pages >= pages ) return;
+        if ( d->arch.paging.shadow.free_pages >= pages )
+            return true;
     }
 
     /* Stage two: all shadow pages are in use in hierarchies that are
@@ -973,7 +976,7 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
                 if ( d->arch.paging.shadow.free_pages >= pages )
                 {
                     guest_flush_tlb_mask(d, d->dirty_cpumask);
-                    return;
+                    return true;
                 }
             }
         }
@@ -986,7 +989,12 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
            d->arch.paging.shadow.total_pages,
            d->arch.paging.shadow.free_pages,
            d->arch.paging.shadow.p2m_pages);
-    BUG();
+
+    ASSERT(d->is_dying);
+
+    guest_flush_tlb_mask(d, d->dirty_cpumask);
+
+    return false;
 }
 
 /* Make sure there are at least count pages of the order according to
@@ -994,9 +1002,19 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
  * This must be called before any calls to shadow_alloc().  Since this
  * will free existing shadows to make room, it must be called early enough
  * to avoid freeing shadows that the caller is currently working on. */
-void shadow_prealloc(struct domain *d, u32 type, unsigned int count)
+bool shadow_prealloc(struct domain *d, unsigned int type, unsigned int count)
 {
-    return _shadow_prealloc(d, shadow_size(type) * count);
+    bool ret = _shadow_prealloc(d, shadow_size(type) * count);
+
+    if ( !ret && !d->is_dying &&
+         (!d->is_shutting_down || d->shutdown_code != SHUTDOWN_crash) )
+        /*
+         * Failing to allocate memory required for shadow usage can only result in
+         * a domain crash, do it here rather that relying on every caller to do it.
+         */
+        domain_crash(d);
+
+    return ret;
 }
 
 /* Deliberately free all the memory we can: this will tear down all of
@@ -1215,7 +1233,7 @@ void shadow_free(struct domain *d, mfn_t smfn)
 static struct page_info *
 shadow_alloc_p2m_page(struct domain *d)
 {
-    struct page_info *pg;
+    struct page_info *pg = NULL;
 
     /* This is called both from the p2m code (which never holds the
      * paging lock) and the log-dirty code (which always does). */
@@ -1233,16 +1251,18 @@ shadow_alloc_p2m_page(struct domain *d)
                     d->arch.paging.shadow.p2m_pages,
                     shadow_min_acceptable_pages(d));
         }
-        paging_unlock(d);
-        return NULL;
+        goto out;
     }
 
-    shadow_prealloc(d, SH_type_p2m_table, 1);
+    if ( !shadow_prealloc(d, SH_type_p2m_table, 1) )
+        goto out;
+
     pg = mfn_to_page(shadow_alloc(d, SH_type_p2m_table, 0));
     d->arch.paging.shadow.p2m_pages++;
     d->arch.paging.shadow.total_pages--;
     ASSERT(!page_get_owner(pg) && !(pg->count_info & PGC_count_mask));
 
+ out:
     paging_unlock(d);
 
     return pg;
@@ -1333,7 +1353,9 @@ int shadow_set_allocation(struct domain *d, unsigned int pages, bool *preempted)
         else if ( d->arch.paging.shadow.total_pages > pages )
         {
             /* Need to return memory to domheap */
-            _shadow_prealloc(d, 1);
+            if ( !_shadow_prealloc(d, 1) )
+                return -ENOMEM;
+
             sp = page_list_remove_head(&d->arch.paging.shadow.freelist);
             ASSERT(sp);
             /*
@@ -2401,12 +2423,13 @@ static void sh_update_paging_modes(struct vcpu *v)
     if ( mfn_eq(v->arch.paging.shadow.oos_snapshot[0], INVALID_MFN) )
     {
         int i;
+
+        if ( !shadow_prealloc(d, SH_type_oos_snapshot, SHADOW_OOS_PAGES) )
+            return;
+
         for(i = 0; i < SHADOW_OOS_PAGES; i++)
-        {
-            shadow_prealloc(d, SH_type_oos_snapshot, 1);
             v->arch.paging.shadow.oos_snapshot[i] =
                 shadow_alloc(d, SH_type_oos_snapshot, 0);
-        }
     }
 #endif /* OOS */
 
@@ -2470,6 +2493,9 @@ static void sh_update_paging_modes(struct vcpu *v)
             mfn_t mmfn = sh_make_monitor_table(
                              v, v->arch.paging.mode->shadow.shadow_levels);
 
+            if ( mfn_eq(mmfn, INVALID_MFN) )
+                return;
+
             v->arch.hvm.monitor_table = pagetable_from_mfn(mmfn);
             make_cr3(v, mmfn);
             hvm_update_host_cr3(v);
@@ -2508,6 +2534,12 @@ static void sh_update_paging_modes(struct vcpu *v)
                 v->arch.hvm.monitor_table = pagetable_null();
                 new_mfn = sh_make_monitor_table(
                               v, v->arch.paging.mode->shadow.shadow_levels);
+                if ( mfn_eq(new_mfn, INVALID_MFN) )
+                {
+                    sh_destroy_monitor_table(v, old_mfn,
+                                             old_mode->shadow.shadow_levels);
+                    return;
+                }
                 v->arch.hvm.monitor_table = pagetable_from_mfn(new_mfn);
                 SHADOW_PRINTK("new monitor table %"PRI_mfn "\n",
                                mfn_x(new_mfn));
@@ -2593,7 +2625,12 @@ void sh_set_toplevel_shadow(struct vcpu *v,
     if ( !mfn_valid(smfn) )
     {
         /* Make sure there's enough free shadow memory. */
-        shadow_prealloc(d, root_type, 1);
+        if ( !shadow_prealloc(d, root_type, 1) )
+        {
+            new_entry = pagetable_null();
+            goto install_new_entry;
+        }
+
         /* Shadow the page. */
         smfn = make_shadow(v, gmfn, root_type);
     }
diff --git a/xen/arch/x86/mm/shadow/hvm.c b/xen/arch/x86/mm/shadow/hvm.c
index 87fc57704f..d68796c495 100644
--- a/xen/arch/x86/mm/shadow/hvm.c
+++ b/xen/arch/x86/mm/shadow/hvm.c
@@ -700,7 +700,9 @@ mfn_t sh_make_monitor_table(const struct vcpu *v, unsigned int shadow_levels)
     ASSERT(!pagetable_get_pfn(v->arch.hvm.monitor_table));
 
     /* Guarantee we can get the memory we need */
-    shadow_prealloc(d, SH_type_monitor_table, CONFIG_PAGING_LEVELS);
+    if ( !shadow_prealloc(d, SH_type_monitor_table, CONFIG_PAGING_LEVELS) )
+        return INVALID_MFN;
+
     m4mfn = shadow_alloc(d, SH_type_monitor_table, 0);
     mfn_to_page(m4mfn)->shadow_flags = 4;
 
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 7e0494cf7f..6a9f82d39c 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -2825,9 +2825,14 @@ static int sh_page_fault(struct vcpu *v,
      * Preallocate shadow pages *before* removing writable accesses
      * otherwhise an OOS L1 might be demoted and promoted again with
      * writable mappings. */
-    shadow_prealloc(d,
-                    SH_type_l1_shadow,
-                    GUEST_PAGING_LEVELS < 4 ? 1 : GUEST_PAGING_LEVELS - 1);
+    if ( !shadow_prealloc(d, SH_type_l1_shadow,
+                          GUEST_PAGING_LEVELS < 4
+                          ? 1 : GUEST_PAGING_LEVELS - 1) )
+    {
+        paging_unlock(d);
+        put_gfn(d, gfn_x(gfn));
+        return 0;
+    }
 
     rc = gw_remove_write_accesses(v, va, &gw);
 
diff --git a/xen/arch/x86/mm/shadow/private.h b/xen/arch/x86/mm/shadow/private.h
index 911db46e73..3fe0388e7c 100644
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -351,7 +351,8 @@ void shadow_promote(struct domain *d, mfn_t gmfn, u32 type);
 void shadow_demote(struct domain *d, mfn_t gmfn, u32 type);
 
 /* Shadow page allocation functions */
-void  shadow_prealloc(struct domain *d, u32 shadow_type, unsigned int count);
+bool __must_check shadow_prealloc(struct domain *d, unsigned int shadow_type,
+                                  unsigned int count);
 mfn_t shadow_alloc(struct domain *d,
                     u32 shadow_type,
                     unsigned long backpointer);
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.15


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 03:34:03 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 03:34:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.430843.683000 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onted-0001qD-RT; Thu, 27 Oct 2022 03:34:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 430843.683000; Thu, 27 Oct 2022 03:34:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onted-0001q4-Og; Thu, 27 Oct 2022 03:34:03 +0000
Received: by outflank-mailman (input) for mailman id 430843;
 Thu, 27 Oct 2022 03:34:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onted-0001pw-3w
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:34:03 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onted-0000j5-34
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:34:03 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onted-0004qj-29
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:34:03 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=77iAPOxf8iFaRZ1wiuYAgrAV/cwMGhhHTMUIYOF0e5c=; b=w69hFe9562VvYuBuTVhhnJa9FO
	QrH93mjHmy7td5XtlqapnTDSQyris1KlfQhEaEgWOE1IyR3ZXDUUEr/52VEpHZqDVX+jRXoq1dXPJ
	PqwzanrUzOEAhgQjnE3YMRwMXFdHp8K5IYEfCJBblOEsBhsdlzsxQXKiXC0UQRtbnQhI=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.15] x86/p2m: refuse new allocations for dying domains
Message-Id: <E1onted-0004qj-29@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 03:34:03 +0000

commit 4f9b535194f70582863f2a78f113547d8822b2b9
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 15:08:28 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:08:28 2022 +0200

    x86/p2m: refuse new allocations for dying domains
    
    This will in particular prevent any attempts to add entries to the p2m,
    once - in a subsequent change - non-root entries have been removed.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: ff600a8cf8e36f8ecbffecf96a035952e022ab87
    master date: 2022-10-11 14:23:22 +0200
---
 xen/arch/x86/mm/hap/hap.c       |  5 ++++-
 xen/arch/x86/mm/shadow/common.c | 18 ++++++++++++++----
 2 files changed, 18 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index d75dc2b9ed..787991233e 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -245,6 +245,9 @@ static struct page_info *hap_alloc(struct domain *d)
 
     ASSERT(paging_locked_by_me(d));
 
+    if ( unlikely(d->is_dying) )
+        return NULL;
+
     pg = page_list_remove_head(&d->arch.paging.hap.freelist);
     if ( unlikely(!pg) )
         return NULL;
@@ -281,7 +284,7 @@ static struct page_info *hap_alloc_p2m_page(struct domain *d)
         d->arch.paging.hap.p2m_pages++;
         ASSERT(!page_get_owner(pg) && !(pg->count_info & PGC_count_mask));
     }
-    else if ( !d->arch.paging.p2m_alloc_failed )
+    else if ( !d->arch.paging.p2m_alloc_failed && !d->is_dying )
     {
         d->arch.paging.p2m_alloc_failed = 1;
         dprintk(XENLOG_ERR, "d%i failed to allocate from HAP pool\n",
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index fc4f7f78ce..9ad7e5a886 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -938,6 +938,10 @@ static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
     if ( d->arch.paging.shadow.free_pages >= pages )
         return true;
 
+    if ( unlikely(d->is_dying) )
+        /* No reclaim when the domain is dying, teardown will take care of it. */
+        return false;
+
     /* Shouldn't have enabled shadows if we've no vcpus. */
     ASSERT(d->vcpu && d->vcpu[0]);
 
@@ -990,7 +994,7 @@ static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
            d->arch.paging.shadow.free_pages,
            d->arch.paging.shadow.p2m_pages);
 
-    ASSERT(d->is_dying);
+    ASSERT_UNREACHABLE();
 
     guest_flush_tlb_mask(d, d->dirty_cpumask);
 
@@ -1004,10 +1008,13 @@ static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
  * to avoid freeing shadows that the caller is currently working on. */
 bool shadow_prealloc(struct domain *d, unsigned int type, unsigned int count)
 {
-    bool ret = _shadow_prealloc(d, shadow_size(type) * count);
+    bool ret;
 
-    if ( !ret && !d->is_dying &&
-         (!d->is_shutting_down || d->shutdown_code != SHUTDOWN_crash) )
+    if ( unlikely(d->is_dying) )
+       return false;
+
+    ret = _shadow_prealloc(d, shadow_size(type) * count);
+    if ( !ret && (!d->is_shutting_down || d->shutdown_code != SHUTDOWN_crash) )
         /*
          * Failing to allocate memory required for shadow usage can only result in
          * a domain crash, do it here rather that relying on every caller to do it.
@@ -1235,6 +1242,9 @@ shadow_alloc_p2m_page(struct domain *d)
 {
     struct page_info *pg = NULL;
 
+    if ( unlikely(d->is_dying) )
+       return NULL;
+
     /* This is called both from the p2m code (which never holds the
      * paging lock) and the log-dirty code (which always does). */
     paging_lock_recursive(d);
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.15


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 03:34:14 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 03:34:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.430844.683005 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onten-0001tI-TX; Thu, 27 Oct 2022 03:34:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 430844.683005; Thu, 27 Oct 2022 03:34:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onten-0001tA-QO; Thu, 27 Oct 2022 03:34:13 +0000
Received: by outflank-mailman (input) for mailman id 430844;
 Thu, 27 Oct 2022 03:34:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onten-0001t2-76
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:34:13 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onten-0000jY-6R
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:34:13 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onten-0004rd-5V
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:34:13 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=3pg16vMQ97UsZ8tYh4+gycsjUuQrxMEC2tNKFJL1p9E=; b=z/e6HisV8cfCUirGfznV0U5Qlx
	50uDeIi9ZNUjgZpjP50LGqsikgs8E4VDg063t0b2XrqQBzHgI3MohxrKsrbWZnuTXvsLyz9Jj4E7B
	KkAwFr5Tn8UJC//DsVOVyt7dkLGLzrQ8hxp1gXZsxlzECMlx9kjrcN2FZmcod04SIyJQ=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.15] x86/p2m: truly free paging pool memory for dying domains
Message-Id: <E1onten-0004rd-5V@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 03:34:13 +0000

commit 7f055b011a657f8f16b0df242301efb312058eea
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 15:08:42 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:08:42 2022 +0200

    x86/p2m: truly free paging pool memory for dying domains
    
    Modify {hap,shadow}_free to free the page immediately if the domain is
    dying, so that pages don't accumulate in the pool when
    {shadow,hap}_final_teardown() get called. This is to limit the amount of
    work which needs to be done there (in a non-preemptable manner).
    
    Note the call to shadow_free() in shadow_free_p2m_page() is moved after
    increasing total_pages, so that the decrease done in shadow_free() in
    case the domain is dying doesn't underflow the counter, even if just for
    a short interval.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: f50a2c0e1d057c00d6061f40ae24d068226052ad
    master date: 2022-10-11 14:23:51 +0200
---
 xen/arch/x86/mm/hap/hap.c       | 12 ++++++++++++
 xen/arch/x86/mm/shadow/common.c | 28 +++++++++++++++++++++++++---
 2 files changed, 37 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 787991233e..aef2297450 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -265,6 +265,18 @@ static void hap_free(struct domain *d, mfn_t mfn)
 
     ASSERT(paging_locked_by_me(d));
 
+    /*
+     * For dying domains, actually free the memory here. This way less work is
+     * left to hap_final_teardown(), which cannot easily have preemption checks
+     * added.
+     */
+    if ( unlikely(d->is_dying) )
+    {
+        free_domheap_page(pg);
+        d->arch.paging.hap.total_pages--;
+        return;
+    }
+
     d->arch.paging.hap.free_pages++;
     page_list_add_tail(pg, &d->arch.paging.hap.freelist);
 }
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 9ad7e5a886..366956c146 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -1184,6 +1184,7 @@ mfn_t shadow_alloc(struct domain *d,
 void shadow_free(struct domain *d, mfn_t smfn)
 {
     struct page_info *next = NULL, *sp = mfn_to_page(smfn);
+    bool dying = ACCESS_ONCE(d->is_dying);
     struct page_list_head *pin_list;
     unsigned int pages;
     u32 shadow_type;
@@ -1226,11 +1227,32 @@ void shadow_free(struct domain *d, mfn_t smfn)
          * just before the allocator hands the page out again. */
         page_set_tlbflush_timestamp(sp);
         perfc_decr(shadow_alloc_count);
-        page_list_add_tail(sp, &d->arch.paging.shadow.freelist);
+
+        /*
+         * For dying domains, actually free the memory here. This way less
+         * work is left to shadow_final_teardown(), which cannot easily have
+         * preemption checks added.
+         */
+        if ( unlikely(dying) )
+        {
+            /*
+             * The backpointer field (sh.back) used by shadow code aliases the
+             * domain owner field, unconditionally clear it here to avoid
+             * free_domheap_page() attempting to parse it.
+             */
+            page_set_owner(sp, NULL);
+            free_domheap_page(sp);
+        }
+        else
+            page_list_add_tail(sp, &d->arch.paging.shadow.freelist);
+
         sp = next;
     }
 
-    d->arch.paging.shadow.free_pages += pages;
+    if ( unlikely(dying) )
+        d->arch.paging.shadow.total_pages -= pages;
+    else
+        d->arch.paging.shadow.free_pages += pages;
 }
 
 /* Divert a page from the pool to be used by the p2m mapping.
@@ -1300,9 +1322,9 @@ shadow_free_p2m_page(struct domain *d, struct page_info *pg)
      * paging lock) and the log-dirty code (which always does). */
     paging_lock_recursive(d);
 
-    shadow_free(d, page_to_mfn(pg));
     d->arch.paging.shadow.p2m_pages--;
     d->arch.paging.shadow.total_pages++;
+    shadow_free(d, page_to_mfn(pg));
 
     paging_unlock(d);
 }
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.15


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 03:34:24 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 03:34:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.430845.683009 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ontey-0001wb-0r; Thu, 27 Oct 2022 03:34:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 430845.683009; Thu, 27 Oct 2022 03:34:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ontex-0001wT-UR; Thu, 27 Oct 2022 03:34:23 +0000
Received: by outflank-mailman (input) for mailman id 430845;
 Thu, 27 Oct 2022 03:34:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ontex-0001wM-AO
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:34:23 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ontex-0000jj-9f
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:34:23 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ontex-0004se-8u
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:34:23 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=8Io7oorUM0+vY4z7xapkFoshPHc22bYVIkr1wuuMZ2A=; b=AQXFln5SF2dU3S7EVl7x9DbvR1
	RXQoHQHY/doWkaN6mcSg1W20RMyh84HtkuPK726EG5IUSVj1HHwpjs+aJBGsDL6XywsxLPYUjFrCW
	5b3k9XmjemVC1ThoDRid2+91jdvIHe4Cd+Al3srRAF8+Xz0CTuolkF2qSGLdhm3DhUmg=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.15] x86/p2m: free the paging memory pool preemptively
Message-Id: <E1ontex-0004se-8u@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 03:34:23 +0000

commit 686c920fa9389fe2b6b619643024ed98b4b7d51f
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 15:08:58 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:08:58 2022 +0200

    x86/p2m: free the paging memory pool preemptively
    
    The paging memory pool is currently freed in two different places:
    from {shadow,hap}_teardown() via domain_relinquish_resources() and
    from {shadow,hap}_final_teardown() via complete_domain_destroy().
    While the former does handle preemption, the latter doesn't.
    
    Attempt to move as much p2m-related freeing as possible to happen
    before the call to {shadow,hap}_teardown(), so that most memory can be
    freed in a preemptible way.  In order to avoid causing issues for
    existing callers, leave the root p2m page tables set and free them in
    {hap,shadow}_final_teardown().  Also modify {hap,shadow}_free to free
    the page immediately if the domain is dying, so that pages don't
    accumulate in the pool when {shadow,hap}_final_teardown() gets called.
    
    Move altp2m_vcpu_disable_ve() to be done in hap_teardown(), as that's
    the place where altp2m_active gets disabled now.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Reported-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: e7aa55c0aab36d994bf627c92bd5386ae167e16e
    master date: 2022-10-11 14:24:21 +0200
---
 xen/arch/x86/domain.c           |  7 -------
 xen/arch/x86/mm/hap/hap.c       | 42 +++++++++++++++++++++++++----------------
 xen/arch/x86/mm/shadow/common.c | 12 ++++++++++++
 3 files changed, 38 insertions(+), 23 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 2838f976d7..ce6ddcf313 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -38,7 +38,6 @@
 #include <xen/livepatch.h>
 #include <public/sysctl.h>
 #include <public/hvm/hvm_vcpu.h>
-#include <asm/altp2m.h>
 #include <asm/regs.h>
 #include <asm/mc146818rtc.h>
 #include <asm/system.h>
@@ -2358,12 +2357,6 @@ int domain_relinquish_resources(struct domain *d)
             vpmu_destroy(v);
         }
 
-        if ( altp2m_active(d) )
-        {
-            for_each_vcpu ( d, v )
-                altp2m_vcpu_disable_ve(v);
-        }
-
         if ( is_pv_domain(d) )
         {
             for_each_vcpu ( d, v )
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index aef2297450..a44fcfd95e 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -28,6 +28,7 @@
 #include <xen/domain_page.h>
 #include <xen/guest_access.h>
 #include <xen/keyhandler.h>
+#include <asm/altp2m.h>
 #include <asm/event.h>
 #include <asm/page.h>
 #include <asm/current.h>
@@ -546,24 +547,8 @@ void hap_final_teardown(struct domain *d)
     unsigned int i;
 
     if ( hvm_altp2m_supported() )
-    {
-        d->arch.altp2m_active = 0;
-
-        if ( d->arch.altp2m_eptp )
-        {
-            free_xenheap_page(d->arch.altp2m_eptp);
-            d->arch.altp2m_eptp = NULL;
-        }
-
-        if ( d->arch.altp2m_visible_eptp )
-        {
-            free_xenheap_page(d->arch.altp2m_visible_eptp);
-            d->arch.altp2m_visible_eptp = NULL;
-        }
-
         for ( i = 0; i < MAX_ALTP2M; i++ )
             p2m_teardown(d->arch.altp2m_p2m[i], true);
-    }
 
     /* Destroy nestedp2m's first */
     for (i = 0; i < MAX_NESTEDP2M; i++) {
@@ -578,6 +563,8 @@ void hap_final_teardown(struct domain *d)
     paging_lock(d);
     hap_set_allocation(d, 0, NULL);
     ASSERT(d->arch.paging.hap.p2m_pages == 0);
+    ASSERT(d->arch.paging.hap.free_pages == 0);
+    ASSERT(d->arch.paging.hap.total_pages == 0);
     paging_unlock(d);
 }
 
@@ -603,6 +590,7 @@ void hap_vcpu_teardown(struct vcpu *v)
 void hap_teardown(struct domain *d, bool *preempted)
 {
     struct vcpu *v;
+    unsigned int i;
 
     ASSERT(d->is_dying);
     ASSERT(d != current->domain);
@@ -611,6 +599,28 @@ void hap_teardown(struct domain *d, bool *preempted)
     for_each_vcpu ( d, v )
         hap_vcpu_teardown(v);
 
+    /* Leave the root pt in case we get further attempts to modify the p2m. */
+    if ( hvm_altp2m_supported() )
+    {
+        if ( altp2m_active(d) )
+            for_each_vcpu ( d, v )
+                altp2m_vcpu_disable_ve(v);
+
+        d->arch.altp2m_active = 0;
+
+        FREE_XENHEAP_PAGE(d->arch.altp2m_eptp);
+        FREE_XENHEAP_PAGE(d->arch.altp2m_visible_eptp);
+
+        for ( i = 0; i < MAX_ALTP2M; i++ )
+            p2m_teardown(d->arch.altp2m_p2m[i], false);
+    }
+
+    /* Destroy nestedp2m's after altp2m. */
+    for ( i = 0; i < MAX_NESTEDP2M; i++ )
+        p2m_teardown(d->arch.nested_p2m[i], false);
+
+    p2m_teardown(p2m_get_hostp2m(d), false);
+
     paging_lock(d); /* Keep various asserts happy */
 
     if ( d->arch.paging.hap.total_pages != 0 )
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 366956c146..680766fd51 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2891,8 +2891,17 @@ void shadow_teardown(struct domain *d, bool *preempted)
     for_each_vcpu ( d, v )
         shadow_vcpu_teardown(v);
 
+    p2m_teardown(p2m_get_hostp2m(d), false);
+
     paging_lock(d);
 
+    /*
+     * Reclaim all shadow memory so that shadow_set_allocation() doesn't find
+     * in-use pages, as _shadow_prealloc() will no longer try to reclaim pages
+     * because the domain is dying.
+     */
+    shadow_blow_tables(d);
+
 #if (SHADOW_OPTIMIZATIONS & (SHOPT_VIRTUAL_TLB|SHOPT_OUT_OF_SYNC))
     /* Free the virtual-TLB array attached to each vcpu */
     for_each_vcpu(d, v)
@@ -3013,6 +3022,9 @@ void shadow_final_teardown(struct domain *d)
                    d->arch.paging.shadow.total_pages,
                    d->arch.paging.shadow.free_pages,
                    d->arch.paging.shadow.p2m_pages);
+    ASSERT(!d->arch.paging.shadow.total_pages);
+    ASSERT(!d->arch.paging.shadow.free_pages);
+    ASSERT(!d->arch.paging.shadow.p2m_pages);
     paging_unlock(d);
 }
 
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.15


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 03:34:34 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 03:34:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.430846.683013 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ontf8-0001zP-2b; Thu, 27 Oct 2022 03:34:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 430846.683013; Thu, 27 Oct 2022 03:34:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ontf7-0001zF-Vx; Thu, 27 Oct 2022 03:34:33 +0000
Received: by outflank-mailman (input) for mailman id 430846;
 Thu, 27 Oct 2022 03:34:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ontf7-0001z6-Dx
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:34:33 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ontf7-0000jt-DD
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:34:33 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ontf7-0004te-CI
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:34:33 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=6AKtZ4bLXDMdcpijxNE1fxkqjB03vbwsHF3kx/z3ZuA=; b=SMdKRwlbZcR8uIrcD74DZ98IoJ
	w0y1jQl+6gFD2GJFD4enAQValpXKnTB3M/m4MBt0zm3FaCPW7ImrqyFr1FVHL57gvey8GlZxautE1
	vGL4ndgV9V+wjmw/Tr+gHRMDiiZXZsFiWBCsPcWRl2uIy/fxtKlMuaxPem1gHDAc8Hqo=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.15] xen/x86: p2m: Add preemption in p2m_teardown()
Message-Id: <E1ontf7-0004te-CI@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 03:34:33 +0000

commit b03074bb47d10c9373688b3661c7c31da01c21a3
Author:     Julien Grall <jgrall@amazon.com>
AuthorDate: Tue Oct 11 15:09:12 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:09:12 2022 +0200

    xen/x86: p2m: Add preemption in p2m_teardown()
    
    The list p2m->pages contains all the pages used by the P2M.  On large
    instances this list can be quite long, and the time spent calling
    d->arch.paging.free_page() can exceed 1ms for an 80GB guest on Xen
    running in a nested environment on a c5.metal.
    
    By extrapolation, it would take > 100ms for an 8TB guest (the largest
    we currently security-support). So add some preemption in
    p2m_teardown() and propagate it to the callers. Note there are 3
    places where preemption is not enabled:
        - hap_final_teardown()/shadow_final_teardown(): Updates to the
          P2M are prevented once the domain is dying (so no more pages
          can be allocated), and most of the P2M pages will already have
          been freed in a preemptible manner when relinquishing the
          resources. So it is fine to disable preemption here.
        - shadow_enable(): This is fine because it will undo the allocation
          that may have been made by p2m_alloc_table() (so only the root
          page table).
    
    The preemption is arbitrarily checked every 1024 iterations.
    
    We now need to include <xen/event.h> in p2m-basic in order to
    import the definition for local_events_need_delivery() used by
    general_preempt_check(). Ideally, the inclusion should happen in
    xen/sched.h but it opened a can of worms.
    
    Note that with the current approach, Xen doesn't keep track of whether
    the alt/nested P2Ms have been cleared. So there is some redundant
    work. However, this is not expected to incur much overhead (the P2M
    lock shouldn't be contended during teardown), so this optimization is
    left outside of the security fix.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    master commit: 8a2111250b424edc49c65c4d41b276766d30635c
    master date: 2022-10-11 14:24:48 +0200
---
 xen/arch/x86/mm/hap/hap.c       | 22 ++++++++++++++++------
 xen/arch/x86/mm/p2m.c           | 18 +++++++++++++++---
 xen/arch/x86/mm/shadow/common.c | 12 +++++++++---
 xen/include/asm-x86/p2m.h       |  2 +-
 4 files changed, 41 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index a44fcfd95e..1f9a157a0c 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -548,17 +548,17 @@ void hap_final_teardown(struct domain *d)
 
     if ( hvm_altp2m_supported() )
         for ( i = 0; i < MAX_ALTP2M; i++ )
-            p2m_teardown(d->arch.altp2m_p2m[i], true);
+            p2m_teardown(d->arch.altp2m_p2m[i], true, NULL);
 
     /* Destroy nestedp2m's first */
     for (i = 0; i < MAX_NESTEDP2M; i++) {
-        p2m_teardown(d->arch.nested_p2m[i], true);
+        p2m_teardown(d->arch.nested_p2m[i], true, NULL);
     }
 
     if ( d->arch.paging.hap.total_pages != 0 )
         hap_teardown(d, NULL);
 
-    p2m_teardown(p2m_get_hostp2m(d), true);
+    p2m_teardown(p2m_get_hostp2m(d), true, NULL);
     /* Free any memory that the p2m teardown released */
     paging_lock(d);
     hap_set_allocation(d, 0, NULL);
@@ -612,14 +612,24 @@ void hap_teardown(struct domain *d, bool *preempted)
         FREE_XENHEAP_PAGE(d->arch.altp2m_visible_eptp);
 
         for ( i = 0; i < MAX_ALTP2M; i++ )
-            p2m_teardown(d->arch.altp2m_p2m[i], false);
+        {
+            p2m_teardown(d->arch.altp2m_p2m[i], false, preempted);
+            if ( preempted && *preempted )
+                return;
+        }
     }
 
     /* Destroy nestedp2m's after altp2m. */
     for ( i = 0; i < MAX_NESTEDP2M; i++ )
-        p2m_teardown(d->arch.nested_p2m[i], false);
+    {
+        p2m_teardown(d->arch.nested_p2m[i], false, preempted);
+        if ( preempted && *preempted )
+            return;
+    }
 
-    p2m_teardown(p2m_get_hostp2m(d), false);
+    p2m_teardown(p2m_get_hostp2m(d), false, preempted);
+    if ( preempted && *preempted )
+        return;
 
     paging_lock(d); /* Keep various asserts happy */
 
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 8ba73082c1..107f6778a6 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -741,12 +741,13 @@ int p2m_alloc_table(struct p2m_domain *p2m)
  * hvm fixme: when adding support for pvh non-hardware domains, this path must
  * cleanup any foreign p2m types (release refcnts on them).
  */
-void p2m_teardown(struct p2m_domain *p2m, bool remove_root)
+void p2m_teardown(struct p2m_domain *p2m, bool remove_root, bool *preempted)
 /* Return all the p2m pages to Xen.
  * We know we don't have any extra mappings to these pages */
 {
     struct page_info *pg, *root_pg = NULL;
     struct domain *d;
+    unsigned int i = 0;
 
     if (p2m == NULL)
         return;
@@ -765,8 +766,19 @@ void p2m_teardown(struct p2m_domain *p2m, bool remove_root)
     }
 
     while ( (pg = page_list_remove_head(&p2m->pages)) )
-        if ( pg != root_pg )
-            d->arch.paging.free_page(d, pg);
+    {
+        if ( pg == root_pg )
+            continue;
+
+        d->arch.paging.free_page(d, pg);
+
+        /* Arbitrarily check preemption every 1024 iterations */
+        if ( preempted && !(++i % 1024) && general_preempt_check() )
+        {
+            *preempted = true;
+            break;
+        }
+    }
 
     if ( root_pg )
         page_list_add(root_pg, &p2m->pages);
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 680766fd51..8f7fddcee1 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2837,8 +2837,12 @@ int shadow_enable(struct domain *d, u32 mode)
  out_locked:
     paging_unlock(d);
  out_unlocked:
+    /*
+     * This is fine to ignore the preemption here because only the root
+     * will be allocated by p2m_alloc_table().
+     */
     if ( rv != 0 && !pagetable_is_null(p2m_get_pagetable(p2m)) )
-        p2m_teardown(p2m, true);
+        p2m_teardown(p2m, true, NULL);
     if ( rv != 0 && pg != NULL )
     {
         pg->count_info &= ~PGC_count_mask;
@@ -2891,7 +2895,9 @@ void shadow_teardown(struct domain *d, bool *preempted)
     for_each_vcpu ( d, v )
         shadow_vcpu_teardown(v);
 
-    p2m_teardown(p2m_get_hostp2m(d), false);
+    p2m_teardown(p2m_get_hostp2m(d), false, preempted);
+    if ( preempted && *preempted )
+        return;
 
     paging_lock(d);
 
@@ -3012,7 +3018,7 @@ void shadow_final_teardown(struct domain *d)
         shadow_teardown(d, NULL);
 
     /* It is now safe to pull down the p2m map. */
-    p2m_teardown(p2m_get_hostp2m(d), true);
+    p2m_teardown(p2m_get_hostp2m(d), true, NULL);
     /* Free any shadow memory that the p2m teardown released */
     paging_lock(d);
     shadow_set_allocation(d, 0, NULL);
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 46eb51d44c..edbe4cee27 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -619,7 +619,7 @@ int p2m_init(struct domain *d);
 int p2m_alloc_table(struct p2m_domain *p2m);
 
 /* Return all the p2m resources to Xen. */
-void p2m_teardown(struct p2m_domain *p2m, bool remove_root);
+void p2m_teardown(struct p2m_domain *p2m, bool remove_root, bool *preempted);
 void p2m_final_teardown(struct domain *d);
 
 /* Add a page to a domain's p2m table */
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.15


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 03:34:44 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 03:34:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.430850.683028 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ontfI-0002JK-FP; Thu, 27 Oct 2022 03:34:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 430850.683028; Thu, 27 Oct 2022 03:34:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ontfI-0002J9-CX; Thu, 27 Oct 2022 03:34:44 +0000
Received: by outflank-mailman (input) for mailman id 430850;
 Thu, 27 Oct 2022 03:34:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ontfH-0002Il-IS
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:34:43 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ontfH-0000k5-Hi
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:34:43 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ontfH-0004uP-Ff
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:34:43 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=QhWuzIbkcwK8h/4nuqn3LpMGxJxL/oXUBt4qt1/uqXU=; b=fs1YahKgnL7HfG/8t4ASvMocu9
	/zE7GnUE62Vs9ilPYWivbo01bb2UP4X4afLwuTzOT1htYQY9LMH3j2OUdHE6iRk1Gqenno4g/HgRU
	8bZNfyLJDSIFdMMmOwHVVh0AtTFqataEb7Lz1VttwZVLY3sgDPIOXm0he+8DjD3WCPPI=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.15] libxl, docs: Use arch-specific default paging memory
Message-Id: <E1ontfH-0004uP-Ff@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 03:34:43 +0000

commit 0c0680d6e7953ca4c91699e60060c732f9ead5c1
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Tue Oct 11 15:09:32 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:09:32 2022 +0200

    libxl, docs: Use arch-specific default paging memory
    
    The default paging memory (described by the `shadow_memory` entry in
    the xl config) in libxl is used to determine the memory pool size for
    xl guests. Currently this size is only used for x86, and includes a
    part of RAM to shadow the resident processes. Since there are no
    shadow-mode guests on Arm, the part of RAM to shadow the resident
    processes is not necessary. Therefore, this commit splits the function
    `libxl_get_required_shadow_memory()` into arch-specific helpers and
    renames the helper to `libxl__arch_get_required_paging_memory()`.
    
    On x86, this helper returns the original value computed by
    `libxl_get_required_shadow_memory()`, so no functional change is
    intended.
    
    On Arm, this helper returns 1MB per vcpu plus 4KB per MiB of RAM
    for the P2M map and additional 512KB.
    
    Also update the xl.cfg documentation to add Arm documentation
    matching the code changes, and correct the comment style to follow
    the Xen coding style.
    
    This is part of CVE-2022-33747 / XSA-409.
    
    Suggested-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    master commit: 156a239ea288972425f967ac807b3cb5b5e14874
    master date: 2022-10-11 14:28:37 +0200
---
 docs/man/xl.cfg.5.pod.in       |  5 +++++
 tools/libs/light/libxl_arch.h  |  4 ++++
 tools/libs/light/libxl_arm.c   | 12 ++++++++++++
 tools/libs/light/libxl_utils.c |  9 ++-------
 tools/libs/light/libxl_x86.c   | 13 +++++++++++++
 5 files changed, 36 insertions(+), 7 deletions(-)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 56370a37db..af7fae7c52 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -1746,6 +1746,11 @@ are not using hardware assisted paging (i.e. you are using shadow
 mode) and your guest workload consists of a very large number of
 similar processes then increasing this value may improve performance.
 
+On Arm, this field is used to determine the size of the guest P2M pages
+pool, and the default value is 1MB per vCPU plus 4KB per MB of RAM for
+the P2M map. Users should adjust this value if bigger P2M pool size is
+needed.
+
 =back
 
 =head3 Processor and Platform Features
diff --git a/tools/libs/light/libxl_arch.h b/tools/libs/light/libxl_arch.h
index 8527fc5c6c..6741b7f6f4 100644
--- a/tools/libs/light/libxl_arch.h
+++ b/tools/libs/light/libxl_arch.h
@@ -90,6 +90,10 @@ void libxl__arch_update_domain_config(libxl__gc *gc,
                                       libxl_domain_config *dst,
                                       const libxl_domain_config *src);
 
+_hidden
+unsigned long libxl__arch_get_required_paging_memory(unsigned long maxmem_kb,
+                                                     unsigned int smp_cpus);
+
 #if defined(__i386__) || defined(__x86_64__)
 
 #define LAPIC_BASE_ADDRESS  0xfee00000
diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index e2901f13b7..d59b464192 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -154,6 +154,18 @@ out:
     return rc;
 }
 
+unsigned long libxl__arch_get_required_paging_memory(unsigned long maxmem_kb,
+                                                     unsigned int smp_cpus)
+{
+    /*
+     * 256 pages (1MB) per vcpu,
+     * plus 1 page per MiB of RAM for the P2M map,
+     * This is higher than the minimum that Xen would allocate if no value
+     * were given (but the Xen minimum is for safety, not performance).
+     */
+    return 4 * (256 * smp_cpus + maxmem_kb / 1024);
+}
+
 static struct arch_info {
     const char *guest_type;
     const char *timer_compat;
diff --git a/tools/libs/light/libxl_utils.c b/tools/libs/light/libxl_utils.c
index 4699c4a0a3..e276c0ee9c 100644
--- a/tools/libs/light/libxl_utils.c
+++ b/tools/libs/light/libxl_utils.c
@@ -18,6 +18,7 @@
 #include <ctype.h>
 
 #include "libxl_internal.h"
+#include "libxl_arch.h"
 #include "_paths.h"
 
 #ifndef LIBXL_HAVE_NONCONST_LIBXL_BASENAME_RETURN_VALUE
@@ -39,13 +40,7 @@ char *libxl_basename(const char *name)
 
 unsigned long libxl_get_required_shadow_memory(unsigned long maxmem_kb, unsigned int smp_cpus)
 {
-    /* 256 pages (1MB) per vcpu,
-       plus 1 page per MiB of RAM for the P2M map,
-       plus 1 page per MiB of RAM to shadow the resident processes.
-       This is higher than the minimum that Xen would allocate if no value
-       were given (but the Xen minimum is for safety, not performance).
-     */
-    return 4 * (256 * smp_cpus + 2 * (maxmem_kb / 1024));
+    return libxl__arch_get_required_paging_memory(maxmem_kb, smp_cpus);
 }
 
 char *libxl_domid_to_name(libxl_ctx *ctx, uint32_t domid)
diff --git a/tools/libs/light/libxl_x86.c b/tools/libs/light/libxl_x86.c
index 18c3c77ccd..4d66478fe9 100644
--- a/tools/libs/light/libxl_x86.c
+++ b/tools/libs/light/libxl_x86.c
@@ -882,6 +882,19 @@ void libxl__arch_update_domain_config(libxl__gc *gc,
                     libxl_defbool_val(src->b_info.arch_x86.msr_relaxed));
 }
 
+unsigned long libxl__arch_get_required_paging_memory(unsigned long maxmem_kb,
+                                                     unsigned int smp_cpus)
+{
+    /*
+     * 256 pages (1MB) per vcpu,
+     * plus 1 page per MiB of RAM for the P2M map,
+     * plus 1 page per MiB of RAM to shadow the resident processes.
+     * This is higher than the minimum that Xen would allocate if no value
+     * were given (but the Xen minimum is for safety, not performance).
+     */
+    return 4 * (256 * smp_cpus + 2 * (maxmem_kb / 1024));
+}
+
 /*
  * Local variables:
  * mode: C
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.15


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 03:34:54 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.15] xen/arm: Construct the P2M pages pool for guests
Message-Id: <E1ontfR-0004vJ-K4@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 03:34:53 +0000

commit 45336d8f88725aec65ee177b1b09abf6eef1dc8d
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Tue Oct 11 15:09:58 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:09:58 2022 +0200

    xen/arm: Construct the P2M pages pool for guests
    
    This commit constructs the p2m pages pool for guests from the
    data structure and helper perspective.
    
    This is implemented by:
    
    - Adding a `struct paging_domain`, containing a freelist, a counter
    and a spinlock, to `struct arch_domain` to track the free p2m pages
    and the total number of p2m pages in the p2m pages pool.
    
    - Adding a helper `p2m_get_allocation` to get the p2m pool size.
    
    - Adding a helper `p2m_set_allocation` to set the p2m pages pool
    size. This helper should be called before allocating memory for
    a guest.
    
    - Adding a helper `p2m_teardown_allocation` to free the p2m pages
    pool. This helper should be called during xl domain destroy.
    
    This is part of CVE-2022-33747 / XSA-409.
    
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    master commit: 55914f7fc91a468649b8a3ec3f53ae1c4aca6670
    master date: 2022-10-11 14:28:39 +0200
---
 xen/arch/arm/p2m.c           | 88 ++++++++++++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/domain.h | 10 +++++
 xen/include/asm-arm/p2m.h    |  4 ++
 3 files changed, 102 insertions(+)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 4ad3e0606e..6883d86277 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -50,6 +50,92 @@ static uint64_t generate_vttbr(uint16_t vmid, mfn_t root_mfn)
     return (mfn_to_maddr(root_mfn) | ((uint64_t)vmid << 48));
 }
 
+/* Return the size of the pool, rounded up to the nearest MB */
+unsigned int p2m_get_allocation(struct domain *d)
+{
+    unsigned long nr_pages = ACCESS_ONCE(d->arch.paging.p2m_total_pages);
+
+    return ROUNDUP(nr_pages, 1 << (20 - PAGE_SHIFT)) >> (20 - PAGE_SHIFT);
+}
+
+/*
+ * Set the pool of pages to the required number of pages.
+ * Returns 0 for success, non-zero for failure.
+ * Call with d->arch.paging.lock held.
+ */
+int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted)
+{
+    struct page_info *pg;
+
+    ASSERT(spin_is_locked(&d->arch.paging.lock));
+
+    for ( ; ; )
+    {
+        if ( d->arch.paging.p2m_total_pages < pages )
+        {
+            /* Need to allocate more memory from domheap */
+            pg = alloc_domheap_page(NULL, 0);
+            if ( pg == NULL )
+            {
+                printk(XENLOG_ERR "Failed to allocate P2M pages.\n");
+                return -ENOMEM;
+            }
+            ACCESS_ONCE(d->arch.paging.p2m_total_pages) =
+                d->arch.paging.p2m_total_pages + 1;
+            page_list_add_tail(pg, &d->arch.paging.p2m_freelist);
+        }
+        else if ( d->arch.paging.p2m_total_pages > pages )
+        {
+            /* Need to return memory to domheap */
+            pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
+            if( pg )
+            {
+                ACCESS_ONCE(d->arch.paging.p2m_total_pages) =
+                    d->arch.paging.p2m_total_pages - 1;
+                free_domheap_page(pg);
+            }
+            else
+            {
+                printk(XENLOG_ERR
+                       "Failed to free P2M pages, P2M freelist is empty.\n");
+                return -ENOMEM;
+            }
+        }
+        else
+            break;
+
+        /* Check to see if we need to yield and try again */
+        if ( preempted && general_preempt_check() )
+        {
+            *preempted = true;
+            return -ERESTART;
+        }
+    }
+
+    return 0;
+}
+
+int p2m_teardown_allocation(struct domain *d)
+{
+    int ret = 0;
+    bool preempted = false;
+
+    spin_lock(&d->arch.paging.lock);
+    if ( d->arch.paging.p2m_total_pages != 0 )
+    {
+        ret = p2m_set_allocation(d, 0, &preempted);
+        if ( preempted )
+        {
+            spin_unlock(&d->arch.paging.lock);
+            return -ERESTART;
+        }
+        ASSERT(d->arch.paging.p2m_total_pages == 0);
+    }
+    spin_unlock(&d->arch.paging.lock);
+
+    return ret;
+}
+
 /* Unlock the flush and do a P2M TLB flush if necessary */
 void p2m_write_unlock(struct p2m_domain *p2m)
 {
@@ -1602,7 +1688,9 @@ int p2m_init(struct domain *d)
     unsigned int cpu;
 
     rwlock_init(&p2m->lock);
+    spin_lock_init(&d->arch.paging.lock);
     INIT_PAGE_LIST_HEAD(&p2m->pages);
+    INIT_PAGE_LIST_HEAD(&d->arch.paging.p2m_freelist);
 
     p2m->vmid = INVALID_VMID;
 
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index bb0a6adbe0..1d8935778f 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -40,6 +40,14 @@ struct vtimer {
     uint64_t cval;
 };
 
+struct paging_domain {
+    spinlock_t lock;
+    /* Free P2M pages from the pre-allocated P2M pool */
+    struct page_list_head p2m_freelist;
+    /* Number of pages from the pre-allocated P2M pool */
+    unsigned long p2m_total_pages;
+};
+
 struct arch_domain
 {
 #ifdef CONFIG_ARM_64
@@ -51,6 +59,8 @@ struct arch_domain
 
     struct hvm_domain hvm;
 
+    struct paging_domain paging;
+
     struct vmmio vmmio;
 
     /* Continuable domain_relinquish_resources(). */
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 3a2d51b35d..18675b2345 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -218,6 +218,10 @@ void p2m_restore_state(struct vcpu *n);
 /* Print debugging/statistial info about a domain's p2m */
 void p2m_dump_info(struct domain *d);
 
+unsigned int p2m_get_allocation(struct domain *d);
+int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted);
+int p2m_teardown_allocation(struct domain *d);
+
 static inline void p2m_write_lock(struct p2m_domain *p2m)
 {
     write_lock(&p2m->lock);
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.15


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 03:35:04 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.15] xen/arm, libxl: Implement XEN_DOMCTL_shadow_op for Arm
Message-Id: <E1ontfb-0004wQ-N0@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 03:35:03 +0000

commit c5215044578e88b401a1296ed6302df05c113c5f
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Tue Oct 11 15:10:16 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:10:16 2022 +0200

    xen/arm, libxl: Implement XEN_DOMCTL_shadow_op for Arm
    
    This commit implements the `XEN_DOMCTL_shadow_op` support in Xen
    for Arm. The p2m pages pool size for xl guests is supposed to be
    determined by `XEN_DOMCTL_shadow_op`. Hence, this commit:
    
    - Introduces a function `p2m_domctl` and implements the subops
    `XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION` and
    `XEN_DOMCTL_SHADOW_OP_GET_ALLOCATION` of `XEN_DOMCTL_shadow_op`.
    
    - Adds the `XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION` support in libxl.
    
    This enables setting the shadow memory pool size when creating a
    guest from xl, and getting the shadow memory pool size from Xen.
    
    Note that the `XEN_DOMCTL_shadow_op` added in this commit is only
    a dummy op, and the functionality of setting/getting p2m memory pool
    size for xl guests will be added in following commits.
    
    This is part of CVE-2022-33747 / XSA-409.
    
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    master commit: cf2a68d2ffbc3ce95e01449d46180bddb10d24a0
    master date: 2022-10-11 14:28:42 +0200
---
 tools/libs/light/libxl_arm.c | 12 ++++++++++++
 xen/arch/arm/domctl.c        | 32 ++++++++++++++++++++++++++++++++
 2 files changed, 44 insertions(+)

diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index d59b464192..d21f614ed7 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -131,6 +131,18 @@ int libxl__arch_domain_create(libxl__gc *gc,
                               libxl__domain_build_state *state,
                               uint32_t domid)
 {
+    libxl_ctx *ctx = libxl__gc_owner(gc);
+    unsigned int shadow_mb = DIV_ROUNDUP(d_config->b_info.shadow_memkb, 1024);
+
+    int r = xc_shadow_control(ctx->xch, domid,
+                              XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION,
+                              &shadow_mb, 0);
+    if (r) {
+        LOGED(ERROR, domid,
+              "Failed to set %u MiB shadow allocation", shadow_mb);
+        return ERROR_FAIL;
+    }
+
     return 0;
 }
 
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index a8c48b0bea..a049bc7f3e 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -45,11 +45,43 @@ static int handle_vuart_init(struct domain *d,
     return rc;
 }
 
+static long p2m_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
+                       XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
+{
+    if ( unlikely(d == current->domain) )
+    {
+        printk(XENLOG_ERR "Tried to do a p2m domctl op on itself.\n");
+        return -EINVAL;
+    }
+
+    if ( unlikely(d->is_dying) )
+    {
+        printk(XENLOG_ERR "Tried to do a p2m domctl op on dying domain %u\n",
+               d->domain_id);
+        return -EINVAL;
+    }
+
+    switch ( sc->op )
+    {
+    case XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION:
+        return 0;
+    case XEN_DOMCTL_SHADOW_OP_GET_ALLOCATION:
+        return 0;
+    default:
+    {
+        printk(XENLOG_ERR "Bad p2m domctl op %u\n", sc->op);
+        return -EINVAL;
+    }
+    }
+}
+
 long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
                     XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     switch ( domctl->cmd )
     {
+    case XEN_DOMCTL_shadow_op:
+        return p2m_domctl(d, &domctl->u.shadow_op, u_domctl);
     case XEN_DOMCTL_cacheflush:
     {
         gfn_t s = _gfn(domctl->u.cacheflush.start_pfn);
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.15


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 03:35:15 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.15] xen/arm: Allocate and free P2M pages from the P2M pool
Message-Id: <E1ontfl-0004xc-QH@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 03:35:13 +0000

commit 7ad38a39f08aadc1578bdb46ccabaad79ed0faee
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Tue Oct 11 15:10:34 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:10:34 2022 +0200

    xen/arm: Allocate and free P2M pages from the P2M pool
    
    This commit sets up/tears down the p2m pages pool for non-privileged
    Arm guests by calling `p2m_set_allocation` and `p2m_teardown_allocation`.
    
    - For dom0, P2M pages should come from heap directly instead of p2m
    pool, so that the kernel may take advantage of the extended regions.
    
    - For xl guests, the setting of the p2m pool is called in
    `XEN_DOMCTL_shadow_op` and the p2m pool is destroyed in
    `domain_relinquish_resources`. Note that domctl->u.shadow_op.mb is
    updated with the new size when setting the p2m pool.
    
    - For dom0less domUs, the setting of the p2m pool is called before
    allocating memory during domain creation. Users can specify the p2m
    pool size by `xen,domain-p2m-mem-mb` dts property.
    
    To actually allocate/free pages from the p2m pool, this commit adds
    two helper functions namely `p2m_alloc_page` and `p2m_free_page` to
    `struct p2m_domain`. By replacing the `alloc_domheap_page` and
    `free_domheap_page` with these two helper functions, p2m pages can
    be added/removed from the list of p2m pool rather than from the heap.
    
    Since the page from `p2m_alloc_page` is already cleaned, take the
    opportunity to remove the redundant `clean_page` in `p2m_create_table`.
    
    This is part of CVE-2022-33747 / XSA-409.
    
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    master commit: cbea5a1149ca7fd4b7cdbfa3ec2e4f109b601ff7
    master date: 2022-10-11 14:28:44 +0200
---
 docs/misc/arm/device-tree/booting.txt |  8 +++++
 xen/arch/arm/domain.c                 |  6 ++++
 xen/arch/arm/domain_build.c           | 29 ++++++++++++++++++
 xen/arch/arm/domctl.c                 | 23 +++++++++++++-
 xen/arch/arm/p2m.c                    | 57 ++++++++++++++++++++++++++++++++---
 5 files changed, 118 insertions(+), 5 deletions(-)

diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
index 5243bc7fd3..470c9491a7 100644
--- a/docs/misc/arm/device-tree/booting.txt
+++ b/docs/misc/arm/device-tree/booting.txt
@@ -164,6 +164,14 @@ with the following properties:
     Both #address-cells and #size-cells need to be specified because
     both sub-nodes (described shortly) have reg properties.
 
+- xen,domain-p2m-mem-mb
+
+    Optional. A 32-bit integer specifying the amount of megabytes of RAM
+    used for the domain P2M pool. This is in-sync with the shadow_memory
+    option in xl.cfg. Leaving this field empty in device tree will lead to
+    the default size of domain P2M pool, i.e. 1MB per guest vCPU plus 4KB
+    per MB of guest RAM plus 512KB for guest extended regions.
+
 Under the "xen,domain" compatible node, one or more sub-nodes are present
 for the DomU kernel and ramdisk.
 
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 223ec9694d..a5ffd952ec 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -985,6 +985,7 @@ enum {
     PROG_page,
     PROG_mapping,
     PROG_p2m,
+    PROG_p2m_pool,
     PROG_done,
 };
 
@@ -1044,6 +1045,11 @@ int domain_relinquish_resources(struct domain *d)
         if ( ret )
             return ret;
 
+    PROGRESS(p2m_pool):
+        ret = p2m_teardown_allocation(d);
+        if( ret )
+            return ret;
+
     PROGRESS(done):
         break;
 
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 26c1342948..df0ec84f03 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2333,6 +2333,21 @@ static void __init find_gnttab_region(struct domain *d,
            kinfo->gnttab_start, kinfo->gnttab_start + kinfo->gnttab_size);
 }
 
+static unsigned long __init domain_p2m_pages(unsigned long maxmem_kb,
+                                             unsigned int smp_cpus)
+{
+    /*
+     * Keep in sync with libxl__get_required_paging_memory().
+     * 256 pages (1MB) per vcpu, plus 1 page per MiB of RAM for the P2M map,
+     * plus 128 pages to cover extended regions.
+     */
+    unsigned long memkb = 4 * (256 * smp_cpus + (maxmem_kb / 1024) + 128);
+
+    BUILD_BUG_ON(PAGE_SIZE != SZ_4K);
+
+    return DIV_ROUND_UP(memkb, 1024) << (20 - PAGE_SHIFT);
+}
+
 static int __init construct_domain(struct domain *d, struct kernel_info *kinfo)
 {
     unsigned int i;
@@ -2424,6 +2439,8 @@ static int __init construct_domU(struct domain *d,
     struct kernel_info kinfo = {};
     int rc;
     u64 mem;
+    u32 p2m_mem_mb;
+    unsigned long p2m_pages;
 
     rc = dt_property_read_u64(node, "memory", &mem);
     if ( !rc )
@@ -2433,6 +2450,18 @@ static int __init construct_domU(struct domain *d,
     }
     kinfo.unassigned_mem = (paddr_t)mem * SZ_1K;
 
+    rc = dt_property_read_u32(node, "xen,domain-p2m-mem-mb", &p2m_mem_mb);
+    /* If xen,domain-p2m-mem-mb is not specified, use the default value. */
+    p2m_pages = rc ?
+                p2m_mem_mb << (20 - PAGE_SHIFT) :
+                domain_p2m_pages(mem, d->max_vcpus);
+
+    spin_lock(&d->arch.paging.lock);
+    rc = p2m_set_allocation(d, p2m_pages, NULL);
+    spin_unlock(&d->arch.paging.lock);
+    if ( rc != 0 )
+        return rc;
+
     printk("*** LOADING DOMU cpus=%u memory=%"PRIx64"KB ***\n", d->max_vcpus, mem);
 
     kinfo.vpl011 = dt_property_read_bool(node, "vpl011");
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index a049bc7f3e..4ab5ed4ab2 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -48,6 +48,9 @@ static int handle_vuart_init(struct domain *d,
 static long p2m_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
                        XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
+    long rc;
+    bool preempted = false;
+
     if ( unlikely(d == current->domain) )
     {
         printk(XENLOG_ERR "Tried to do a p2m domctl op on itself.\n");
@@ -64,9 +67,27 @@ static long p2m_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
     switch ( sc->op )
     {
     case XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION:
-        return 0;
+    {
+        /* Allow and handle preemption */
+        spin_lock(&d->arch.paging.lock);
+        rc = p2m_set_allocation(d, sc->mb << (20 - PAGE_SHIFT), &preempted);
+        spin_unlock(&d->arch.paging.lock);
+
+        if ( preempted )
+            /* Not finished. Set up to re-run the call. */
+            rc = hypercall_create_continuation(__HYPERVISOR_domctl, "h",
+                                               u_domctl);
+        else
+            /* Finished. Return the new allocation. */
+            sc->mb = p2m_get_allocation(d);
+
+        return rc;
+    }
     case XEN_DOMCTL_SHADOW_OP_GET_ALLOCATION:
+    {
+        sc->mb = p2m_get_allocation(d);
         return 0;
+    }
     default:
     {
         printk(XENLOG_ERR "Bad p2m domctl op %u\n", sc->op);
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 6883d86277..c1055ff2a7 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -50,6 +50,54 @@ static uint64_t generate_vttbr(uint16_t vmid, mfn_t root_mfn)
     return (mfn_to_maddr(root_mfn) | ((uint64_t)vmid << 48));
 }
 
+static struct page_info *p2m_alloc_page(struct domain *d)
+{
+    struct page_info *pg;
+
+    spin_lock(&d->arch.paging.lock);
+    /*
+     * For hardware domain, there should be no limit in the number of pages that
+     * can be allocated, so that the kernel may take advantage of the extended
+     * regions. Hence, allocate p2m pages for hardware domains from heap.
+     */
+    if ( is_hardware_domain(d) )
+    {
+        pg = alloc_domheap_page(NULL, 0);
+        if ( pg == NULL )
+        {
+            printk(XENLOG_G_ERR "Failed to allocate P2M pages for hwdom.\n");
+            spin_unlock(&d->arch.paging.lock);
+            return NULL;
+        }
+    }
+    else
+    {
+        pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
+        if ( unlikely(!pg) )
+        {
+            spin_unlock(&d->arch.paging.lock);
+            return NULL;
+        }
+        d->arch.paging.p2m_total_pages--;
+    }
+    spin_unlock(&d->arch.paging.lock);
+
+    return pg;
+}
+
+static void p2m_free_page(struct domain *d, struct page_info *pg)
+{
+    spin_lock(&d->arch.paging.lock);
+    if ( is_hardware_domain(d) )
+        free_domheap_page(pg);
+    else
+    {
+        d->arch.paging.p2m_total_pages++;
+        page_list_add_tail(pg, &d->arch.paging.p2m_freelist);
+    }
+    spin_unlock(&d->arch.paging.lock);
+}
+
 /* Return the size of the pool, rounded up to the nearest MB */
 unsigned int p2m_get_allocation(struct domain *d)
 {
@@ -751,7 +799,7 @@ static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry)
 
     ASSERT(!p2m_is_valid(*entry));
 
-    page = alloc_domheap_page(NULL, 0);
+    page = p2m_alloc_page(p2m->domain);
     if ( page == NULL )
         return -ENOMEM;
 
@@ -878,7 +926,7 @@ static void p2m_free_entry(struct p2m_domain *p2m,
     pg = mfn_to_page(mfn);
 
     page_list_del(pg, &p2m->pages);
-    free_domheap_page(pg);
+    p2m_free_page(p2m->domain, pg);
 }
 
 static bool p2m_split_superpage(struct p2m_domain *p2m, lpae_t *entry,
@@ -902,7 +950,7 @@ static bool p2m_split_superpage(struct p2m_domain *p2m, lpae_t *entry,
     ASSERT(level < target);
     ASSERT(p2m_is_superpage(*entry, level));
 
-    page = alloc_domheap_page(NULL, 0);
+    page = p2m_alloc_page(p2m->domain);
     if ( !page )
         return false;
 
@@ -1644,7 +1692,7 @@ int p2m_teardown(struct domain *d)
 
     while ( (pg = page_list_remove_head(&p2m->pages)) )
     {
-        free_domheap_page(pg);
+        p2m_free_page(p2m->domain, pg);
         count++;
         /* Arbitrarily preempt every 512 iterations */
         if ( !(count % 512) && hypercall_preempt_check() )
@@ -1668,6 +1716,7 @@ void p2m_final_teardown(struct domain *d)
         return;
 
     ASSERT(page_list_empty(&p2m->pages));
+    ASSERT(page_list_empty(&d->arch.paging.p2m_freelist));
 
     if ( p2m->root )
         free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.15


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 03:35:25 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.15] gnttab: correct locking on transitive grant copy error path
Message-Id: <E1ontfv-0004yb-Tf@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 03:35:23 +0000

commit bb43a10fefe494ab747b020fef3e823b63fc566d
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 15:11:01 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:11:01 2022 +0200

    gnttab: correct locking on transitive grant copy error path
    
    While the comment next to the lock dropping in preparation of
    recursively calling acquire_grant_for_copy() mistakenly talks about the
    rd == td case (excluded a few lines further up), the same concerns apply
    to the calling of release_grant_for_copy() on a subsequent error path.
    
    This is CVE-2022-33748 / XSA-411.
    
    Fixes: ad48fb963dbf ("gnttab: fix transitive grant handling")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    master commit: 6e3aab858eef614a21a782a3b73acc88e74690ea
    master date: 2022-10-11 14:29:30 +0200
---
 xen/common/grant_table.c | 19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 77bba98069..0523beb9b7 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -2608,9 +2608,8 @@ acquire_grant_for_copy(
                      trans_domid);
 
         /*
-         * acquire_grant_for_copy() could take the lock on the
-         * remote table (if rd == td), so we have to drop the lock
-         * here and reacquire.
+         * acquire_grant_for_copy() will take the lock on the remote table,
+         * so we have to drop the lock here and reacquire.
          */
         active_entry_release(act);
         grant_read_unlock(rgt);
@@ -2647,11 +2646,25 @@ acquire_grant_for_copy(
                           act->trans_gref != trans_gref ||
                           !act->is_sub_page)) )
         {
+            /*
+             * Like above for acquire_grant_for_copy() we need to drop and then
+             * re-acquire the locks here to prevent lock order inversion issues.
+             * Unlike for acquire_grant_for_copy() we don't need to re-check
+             * anything, as release_grant_for_copy() doesn't depend on the grant
+             * table entry: It only updates internal state and the status flags.
+             */
+            active_entry_release(act);
+            grant_read_unlock(rgt);
+
             release_grant_for_copy(td, trans_gref, readonly);
             rcu_unlock_domain(td);
+
+            grant_read_lock(rgt);
+            act = active_entry_acquire(rgt, gref);
             reduce_status_for_pin(rd, act, status, readonly);
             active_entry_release(act);
             grant_read_unlock(rgt);
+
             put_page(*page);
             *page = NULL;
             return ERESTART;
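The pattern applied here — releasing the active-entry and grant-table locks before calling into code that may take another table's lock, then re-acquiring them afterwards — is the standard guard against ABBA lock order inversion. A minimal, self-contained sketch of the shape of that error path (toy lock and state names, not Xen's actual API):

```c
#include <assert.h>
#include <stdbool.h>

/* Single-threaded toy locks: acquiring a held lock trips an assert,
 * which is how this sketch models a would-be deadlock. */
struct toy_lock { bool held; };

static void toy_lock_acquire(struct toy_lock *l) { assert(!l->held); l->held = true; }
static void toy_lock_release(struct toy_lock *l) { assert(l->held);  l->held = false; }

static struct toy_lock local_lock, remote_lock;
static int remote_refcount = 1;

/* Callee that takes the remote lock, like release_grant_for_copy(). */
static void release_remote(void)
{
    toy_lock_acquire(&remote_lock);
    remote_refcount--;
    toy_lock_release(&remote_lock);
}

/* Error-path shape after the fix: drop, call out, re-acquire, fix up. */
static int error_path(void)
{
    toy_lock_acquire(&local_lock);
    /* ... discover the cached state is stale ... */
    toy_lock_release(&local_lock);   /* drop before nesting locks */

    release_remote();                /* safe: no local lock held */

    toy_lock_acquire(&local_lock);   /* re-acquire to update local state */
    toy_lock_release(&local_lock);
    return remote_refcount;
}
```

As the added comment in the patch notes, no re-checking is needed after re-acquiring, because the callee only updates its own side's state.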
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.15


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 03:35:35 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.15] tools/libxl: Replace deprecated -soundhw on QEMU command line
Message-Id: <E1ontg6-0004zZ-0g@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 03:35:34 +0000

commit d65ebacb78901b695bc5e8a075ad1ad865a78928
Author:     Anthony PERARD <anthony.perard@citrix.com>
AuthorDate: Tue Oct 11 15:13:15 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:13:15 2022 +0200

    tools/libxl: Replace deprecated -soundhw on QEMU command line
    
    -soundhw has been deprecated since 825ff02911c9 ("audio: add soundhw
    deprecation notice"), QEMU v5.1, and has been removed for the upcoming
    v7.1 by 039a68373c45 ("introduce -audio as a replacement for -soundhw").
    
    Instead we can just add the sound card with "-device", for most options
    that "-soundhw" could handle. "-device" is an option that existed
    before QEMU 1.0, and could already be used to add audio hardware.
    
    The list of possible options for libxl's "soundhw" is taken from QEMU
    7.0.
    
    The options for "soundhw" are listed in order of preference in the
    manual. The first three (hda, ac97, es1370) are PCI devices and easy
    to test on Linux, while the last four are ISA devices which don't seem
    to work out of the box on Linux.
    
    The sound card 'pcspk' isn't listed even though it used to be accepted
    by '-soundhw', because QEMU crashes when trying to add it to a Xen
    domain. It also wouldn't work with "-device"; it might need to be
    "-machine pcspk-audiodev=default" instead.
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
    master commit: 62ca138c2c052187783aca3957d3f47c4dcfd683
    master date: 2022-08-18 09:25:50 +0200
---
 docs/man/xl.cfg.5.pod.in                  |  6 +++---
 tools/libs/light/libxl_dm.c               | 19 ++++++++++++++++++-
 tools/libs/light/libxl_types_internal.idl | 10 ++++++++++
 3 files changed, 31 insertions(+), 4 deletions(-)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index af7fae7c52..ef9505f913 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -2523,9 +2523,9 @@ The form serial=DEVICE is also accepted for backwards compatibility.
 
 =item B<soundhw="DEVICE">
 
-Select the virtual sound card to expose to the guest. The valid
-devices are defined by the device model configuration, please see the
-B<qemu(1)> manpage for details. The default is not to export any sound
+Select the virtual sound card to expose to the guest. The valid devices are
+B<hda>, B<ac97>, B<es1370>, B<adlib>, B<cs4231a>, B<gus>, B<sb16> if there are
+available with the device model QEMU. The default is not to export any sound
 device.
 
 =item B<vkb_device=BOOLEAN>
diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index ae5f35e0c3..b86e8ccc85 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -1204,6 +1204,7 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
     uint64_t ram_size;
     const char *path, *chardev;
     bool is_stubdom = libxl_defbool_val(b_info->device_model_stubdomain);
+    int rc;
 
     dm_args = flexarray_make(gc, 16, 1);
     dm_envs = flexarray_make(gc, 16, 1);
@@ -1531,7 +1532,23 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
             }
         }
         if (b_info->u.hvm.soundhw) {
-            flexarray_vappend(dm_args, "-soundhw", b_info->u.hvm.soundhw, NULL);
+            libxl__qemu_soundhw soundhw;
+
+            rc = libxl__qemu_soundhw_from_string(b_info->u.hvm.soundhw, &soundhw);
+            if (rc) {
+                LOGD(ERROR, guest_domid, "Unknown soundhw option '%s'", b_info->u.hvm.soundhw);
+                return ERROR_INVAL;
+            }
+
+            switch (soundhw) {
+            case LIBXL__QEMU_SOUNDHW_HDA:
+                flexarray_vappend(dm_args, "-device", "intel-hda",
+                                  "-device", "hda-duplex", NULL);
+                break;
+            default:
+                flexarray_append_pair(dm_args, "-device",
+                                      (char*)libxl__qemu_soundhw_to_string(soundhw));
+            }
         }
         if (!libxl__acpi_defbool_val(b_info)) {
             flexarray_append(dm_args, "-no-acpi");
diff --git a/tools/libs/light/libxl_types_internal.idl b/tools/libs/light/libxl_types_internal.idl
index 3593e21dbb..caa08d3229 100644
--- a/tools/libs/light/libxl_types_internal.idl
+++ b/tools/libs/light/libxl_types_internal.idl
@@ -55,3 +55,13 @@ libxl__device_action = Enumeration("device_action", [
     (1, "ADD"),
     (2, "REMOVE"),
     ])
+
+libxl__qemu_soundhw = Enumeration("qemu_soundhw", [
+    (1, "ac97"),
+    (2, "adlib"),
+    (3, "cs4231a"),
+    (4, "es1370"),
+    (5, "gus"),
+    (6, "hda"),
+    (7, "sb16"),
+    ])
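The libxl change above boils down to a small option-to-device mapping, where only "hda" needs special handling (a controller plus a codec device). A stand-alone illustration of the same idea — a hypothetical helper, not libxl's actual code:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Map a soundhw option to the QEMU -device name it expands to.
 * "hda" is special: it needs both a controller ("intel-hda") and a
 * codec device ("hda-duplex"), which the caller must append as well. */
static const char *soundhw_to_device(const char *soundhw)
{
    static const char *const known[] = {
        "ac97", "adlib", "cs4231a", "es1370", "gus", "sb16",
    };

    if (!strcmp(soundhw, "hda"))
        return "intel-hda";

    for (size_t i = 0; i < sizeof(known) / sizeof(known[0]); i++)
        if (!strcmp(soundhw, known[i]))
            return known[i];    /* device name equals the option name */

    return NULL;                /* unknown option: caller reports error */
}
```

This mirrors the switch in libxl__build_device_model_args_new(): the HDA case emits two "-device" arguments, every other known option is passed to "-device" by its own name, and anything else (such as "pcspk") is rejected.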
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.15


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 03:35:45 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.15] x86/CPUID: surface suitable value in EBX of XSTATE subleaf 1
Message-Id: <E1ontgG-00050J-3g@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 03:35:44 +0000

commit 7923ea47e578bca30a6e45951a9da09e827ff028
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 15:14:05 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:14:05 2022 +0200

    x86/CPUID: surface suitable value in EBX of XSTATE subleaf 1
    
    While the SDM isn't very clear about this, our present behavior makes
    Linux 5.19 unhappy. As of commit 8ad7e8f69695 ("x86/fpu/xsave: Support
    XSAVEC in the kernel") it uses this CPUID output also to size the
    compacted area used by XSAVEC. Getting back zero there isn't really
    liked, yet for PV that's the default on capable hardware: XSAVES isn't
    exposed to PV domains.
    
    Considering that the size reported is that of the compacted save area,
    I view Linux's assumption as appropriate (short of the SDM properly
    considering the case). Therefore we need to populate the field also when
    only XSAVEC is supported for a guest.
    
    Fixes: 460b9a4b3630 ("x86/xsaves: enable xsaves/xrstors for hvm guest")
    Fixes: 8d050ed1097c ("x86: don't expose XSAVES capability to PV guests")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: c3bd0b83ea5b7c0da6542687436042eeea1e7909
    master date: 2022-08-24 14:23:59 +0200
---
 xen/arch/x86/cpuid.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index ee2c4ea03a..11c95178f1 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -1052,7 +1052,7 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
         switch ( subleaf )
         {
         case 1:
-            if ( p->xstate.xsaves )
+            if ( p->xstate.xsavec || p->xstate.xsaves )
             {
                 /*
                  * TODO: Figure out what to do for XSS state.  VT-x manages
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.15


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 03:35:55 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.15] xen/sched: introduce cpupool_update_node_affinity()
Message-Id: <E1ontgQ-00051E-70@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 03:35:54 +0000

commit 735b10844489babf52d3193193285a7311cf2c39
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Tue Oct 11 15:14:22 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:14:22 2022 +0200

    xen/sched: introduce cpupool_update_node_affinity()
    
    For updating the node affinities of all domains in a cpupool add a new
    function cpupool_update_node_affinity().
    
    In order to avoid multiple allocations of cpumasks carve out memory
    allocation and freeing from domain_update_node_affinity() into new
    helpers, which can be used by cpupool_update_node_affinity().
    
    Modify domain_update_node_affinity() to take an additional parameter
    for passing the allocated memory in and to allocate and free the memory
    via the new helpers in case NULL was passed.
    
    This will help later to pre-allocate the cpumasks in order to avoid
    allocations in stop-machine context.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Tested-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: a83fa1e2b96ace65b45dde6954d67012633a082b
    master date: 2022-09-05 11:42:30 +0100
---
 xen/common/sched/core.c    | 54 +++++++++++++++++++++++++++++++---------------
 xen/common/sched/cpupool.c | 39 ++++++++++++++++++---------------
 xen/common/sched/private.h |  7 ++++++
 xen/include/xen/sched.h    |  9 +++++++-
 4 files changed, 74 insertions(+), 35 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index f07bd2681f..065a83eca9 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -1824,9 +1824,28 @@ int vcpu_affinity_domctl(struct domain *d, uint32_t cmd,
     return ret;
 }
 
-void domain_update_node_affinity(struct domain *d)
+bool alloc_affinity_masks(struct affinity_masks *affinity)
 {
-    cpumask_var_t dom_cpumask, dom_cpumask_soft;
+    if ( !alloc_cpumask_var(&affinity->hard) )
+        return false;
+    if ( !alloc_cpumask_var(&affinity->soft) )
+    {
+        free_cpumask_var(affinity->hard);
+        return false;
+    }
+
+    return true;
+}
+
+void free_affinity_masks(struct affinity_masks *affinity)
+{
+    free_cpumask_var(affinity->soft);
+    free_cpumask_var(affinity->hard);
+}
+
+void domain_update_node_aff(struct domain *d, struct affinity_masks *affinity)
+{
+    struct affinity_masks masks;
     cpumask_t *dom_affinity;
     const cpumask_t *online;
     struct sched_unit *unit;
@@ -1836,14 +1855,16 @@ void domain_update_node_affinity(struct domain *d)
     if ( !d->vcpu || !d->vcpu[0] )
         return;
 
-    if ( !zalloc_cpumask_var(&dom_cpumask) )
-        return;
-    if ( !zalloc_cpumask_var(&dom_cpumask_soft) )
+    if ( !affinity )
     {
-        free_cpumask_var(dom_cpumask);
-        return;
+        affinity = &masks;
+        if ( !alloc_affinity_masks(affinity) )
+            return;
     }
 
+    cpumask_clear(affinity->hard);
+    cpumask_clear(affinity->soft);
+
     online = cpupool_domain_master_cpumask(d);
 
     spin_lock(&d->node_affinity_lock);
@@ -1864,22 +1885,21 @@ void domain_update_node_affinity(struct domain *d)
          */
         for_each_sched_unit ( d, unit )
         {
-            cpumask_or(dom_cpumask, dom_cpumask, unit->cpu_hard_affinity);
-            cpumask_or(dom_cpumask_soft, dom_cpumask_soft,
-                       unit->cpu_soft_affinity);
+            cpumask_or(affinity->hard, affinity->hard, unit->cpu_hard_affinity);
+            cpumask_or(affinity->soft, affinity->soft, unit->cpu_soft_affinity);
         }
         /* Filter out non-online cpus */
-        cpumask_and(dom_cpumask, dom_cpumask, online);
-        ASSERT(!cpumask_empty(dom_cpumask));
+        cpumask_and(affinity->hard, affinity->hard, online);
+        ASSERT(!cpumask_empty(affinity->hard));
         /* And compute the intersection between hard, online and soft */
-        cpumask_and(dom_cpumask_soft, dom_cpumask_soft, dom_cpumask);
+        cpumask_and(affinity->soft, affinity->soft, affinity->hard);
 
         /*
          * If not empty, the intersection of hard, soft and online is the
          * narrowest set we want. If empty, we fall back to hard&online.
          */
-        dom_affinity = cpumask_empty(dom_cpumask_soft) ?
-                           dom_cpumask : dom_cpumask_soft;
+        dom_affinity = cpumask_empty(affinity->soft) ? affinity->hard
+                                                     : affinity->soft;
 
         nodes_clear(d->node_affinity);
         for_each_cpu ( cpu, dom_affinity )
@@ -1888,8 +1908,8 @@ void domain_update_node_affinity(struct domain *d)
 
     spin_unlock(&d->node_affinity_lock);
 
-    free_cpumask_var(dom_cpumask_soft);
-    free_cpumask_var(dom_cpumask);
+    if ( affinity == &masks )
+        free_affinity_masks(affinity);
 }
 
 typedef long ret_t;
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 8c6e6eb9cc..45b6ff9956 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -401,6 +401,25 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c)
     return ret;
 }
 
+/* Update affinities of all domains in a cpupool. */
+static void cpupool_update_node_affinity(const struct cpupool *c)
+{
+    struct affinity_masks masks;
+    struct domain *d;
+
+    if ( !alloc_affinity_masks(&masks) )
+        return;
+
+    rcu_read_lock(&domlist_read_lock);
+
+    for_each_domain_in_cpupool(d, c)
+        domain_update_node_aff(d, &masks);
+
+    rcu_read_unlock(&domlist_read_lock);
+
+    free_affinity_masks(&masks);
+}
+
 /*
  * assign a specific cpu to a cpupool
  * cpupool_lock must be held
@@ -408,7 +427,6 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c)
 static int cpupool_assign_cpu_locked(struct cpupool *c, unsigned int cpu)
 {
     int ret;
-    struct domain *d;
     const cpumask_t *cpus;
 
     cpus = sched_get_opt_cpumask(c->gran, cpu);
@@ -433,12 +451,7 @@ static int cpupool_assign_cpu_locked(struct cpupool *c, unsigned int cpu)
 
     rcu_read_unlock(&sched_res_rculock);
 
-    rcu_read_lock(&domlist_read_lock);
-    for_each_domain_in_cpupool(d, c)
-    {
-        domain_update_node_affinity(d);
-    }
-    rcu_read_unlock(&domlist_read_lock);
+    cpupool_update_node_affinity(c);
 
     return 0;
 }
@@ -447,18 +460,14 @@ static int cpupool_unassign_cpu_finish(struct cpupool *c)
 {
     int cpu = cpupool_moving_cpu;
     const cpumask_t *cpus;
-    struct domain *d;
     int ret;
 
     if ( c != cpupool_cpu_moving )
         return -EADDRNOTAVAIL;
 
-    /*
-     * We need this for scanning the domain list, both in
-     * cpu_disable_scheduler(), and at the bottom of this function.
-     */
     rcu_read_lock(&domlist_read_lock);
     ret = cpu_disable_scheduler(cpu);
+    rcu_read_unlock(&domlist_read_lock);
 
     rcu_read_lock(&sched_res_rculock);
     cpus = get_sched_res(cpu)->cpus;
@@ -485,11 +494,7 @@ static int cpupool_unassign_cpu_finish(struct cpupool *c)
     }
     rcu_read_unlock(&sched_res_rculock);
 
-    for_each_domain_in_cpupool(d, c)
-    {
-        domain_update_node_affinity(d);
-    }
-    rcu_read_unlock(&domlist_read_lock);
+    cpupool_update_node_affinity(c);
 
     return ret;
 }
diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h
index 92d0d49610..6e036f8c80 100644
--- a/xen/common/sched/private.h
+++ b/xen/common/sched/private.h
@@ -593,6 +593,13 @@ affinity_balance_cpumask(const struct sched_unit *unit, int step,
         cpumask_copy(mask, unit->cpu_hard_affinity);
 }
 
+struct affinity_masks {
+    cpumask_var_t hard;
+    cpumask_var_t soft;
+};
+
+bool alloc_affinity_masks(struct affinity_masks *affinity);
+void free_affinity_masks(struct affinity_masks *affinity);
 void sched_rm_cpu(unsigned int cpu);
 const cpumask_t *sched_get_opt_cpumask(enum sched_gran opt, unsigned int cpu);
 void schedule_dump(struct cpupool *c);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 701963f84c..4e25627d96 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -649,8 +649,15 @@ static inline void get_knownalive_domain(struct domain *d)
     ASSERT(!(atomic_read(&d->refcnt) & DOMAIN_DESTROYED));
 }
 
+struct affinity_masks;
+
 int domain_set_node_affinity(struct domain *d, const nodemask_t *affinity);
-void domain_update_node_affinity(struct domain *d);
+void domain_update_node_aff(struct domain *d, struct affinity_masks *affinity);
+
+static inline void domain_update_node_affinity(struct domain *d)
+{
+    domain_update_node_aff(d, NULL);
+}
 
 /*
  * To be implemented by each architecture, sanity checking the configuration
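The refactoring above follows a common pattern: a helper takes optional pre-allocated scratch space and only allocates (and frees) its own when passed NULL, so a caller looping over many domains pays the allocation cost once. A rough stand-alone sketch with invented names, using plain calloc()/free() in place of alloc_cpumask_var()/free_cpumask_var():

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define MASK_BYTES 16

struct masks { unsigned char *hard, *soft; };

static int alloc_masks(struct masks *m)
{
    m->hard = calloc(1, MASK_BYTES);
    if (!m->hard)
        return 0;
    m->soft = calloc(1, MASK_BYTES);
    if (!m->soft) {
        free(m->hard);
        return 0;
    }
    return 1;
}

static void free_masks(struct masks *m)
{
    free(m->soft);
    free(m->hard);
}

/* Like domain_update_node_aff(): use the caller's scratch masks if
 * given, otherwise allocate and free a local set. */
static int update_one(struct masks *scratch)
{
    struct masks local;
    int used_local = 0;

    if (!scratch) {
        if (!alloc_masks(&local))
            return -1;
        scratch = &local;
        used_local = 1;
    }

    /* Callee clears the masks itself, since a reused scratch set is
     * dirty from the previous iteration (hence zalloc -> alloc). */
    memset(scratch->hard, 0, MASK_BYTES);
    memset(scratch->soft, 0, MASK_BYTES);
    /* ... accumulate affinity into scratch ... */

    if (used_local)
        free_masks(scratch);
    return 0;
}

/* Like cpupool_update_node_affinity(): one allocation for the loop. */
static int update_all(int ndomains)
{
    struct masks m;

    if (!alloc_masks(&m))
        return -1;
    for (int i = 0; i < ndomains; i++) {
        if (update_one(&m)) {
            free_masks(&m);
            return -1;
        }
    }
    free_masks(&m);
    return 0;
}
```

Note the same detail the patch handles: once the masks can be reused, the callee must clear them explicitly, which is why the original zalloc_cpumask_var() calls become alloc_cpumask_var() plus cpumask_clear().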
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.15


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 03:36:06 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.15] xen/sched: carve out memory allocation and freeing from schedule_cpu_rm()
Message-Id: <E1ontga-00051y-AF@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 03:36:04 +0000

commit d638c2085f71f694344b34e70eb1b371c86b00f0
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Tue Oct 11 15:15:14 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:15:14 2022 +0200

    xen/sched: carve out memory allocation and freeing from schedule_cpu_rm()
    
    In order to prepare not allocating or freeing memory from
    schedule_cpu_rm(), move this functionality to dedicated functions.
    
    For now call those functions from schedule_cpu_rm().
    
    No change of behavior expected.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: d42be6f83480b3ada286dc18444331a816be88a3
    master date: 2022-09-05 11:42:30 +0100
---
 xen/common/sched/core.c    | 143 +++++++++++++++++++++++++++------------------
 xen/common/sched/private.h |  11 ++++
 2 files changed, 98 insertions(+), 56 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 065a83eca9..2decb1161a 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -3221,6 +3221,75 @@ out:
     return ret;
 }
 
+/*
+ * Allocate all memory needed for free_cpu_rm_data(), as allocations cannot
+ * be made in stop_machine() context.
+ *
+ * Between alloc_cpu_rm_data() and the real cpu removal action the relevant
+ * contents of struct sched_resource can't change, as the cpu in question is
+ * locked against any other movement to or from cpupools, and the data copied
+ * by alloc_cpu_rm_data() is modified only in case the cpu in question is
+ * being moved from or to a cpupool.
+ */
+struct cpu_rm_data *alloc_cpu_rm_data(unsigned int cpu)
+{
+    struct cpu_rm_data *data;
+    const struct sched_resource *sr;
+    unsigned int idx;
+
+    rcu_read_lock(&sched_res_rculock);
+
+    sr = get_sched_res(cpu);
+    data = xmalloc_flex_struct(struct cpu_rm_data, sr, sr->granularity - 1);
+    if ( !data )
+        goto out;
+
+    data->old_ops = sr->scheduler;
+    data->vpriv_old = idle_vcpu[cpu]->sched_unit->priv;
+    data->ppriv_old = sr->sched_priv;
+
+    for ( idx = 0; idx < sr->granularity - 1; idx++ )
+    {
+        data->sr[idx] = sched_alloc_res();
+        if ( data->sr[idx] )
+        {
+            data->sr[idx]->sched_unit_idle = sched_alloc_unit_mem();
+            if ( !data->sr[idx]->sched_unit_idle )
+            {
+                sched_res_free(&data->sr[idx]->rcu);
+                data->sr[idx] = NULL;
+            }
+        }
+        if ( !data->sr[idx] )
+        {
+            while ( idx > 0 )
+                sched_res_free(&data->sr[--idx]->rcu);
+            XFREE(data);
+            goto out;
+        }
+
+        data->sr[idx]->curr = data->sr[idx]->sched_unit_idle;
+        data->sr[idx]->scheduler = &sched_idle_ops;
+        data->sr[idx]->granularity = 1;
+
+        /* We want the lock not to change when replacing the resource. */
+        data->sr[idx]->schedule_lock = sr->schedule_lock;
+    }
+
+ out:
+    rcu_read_unlock(&sched_res_rculock);
+
+    return data;
+}
+
+void free_cpu_rm_data(struct cpu_rm_data *mem, unsigned int cpu)
+{
+    sched_free_udata(mem->old_ops, mem->vpriv_old);
+    sched_free_pdata(mem->old_ops, mem->ppriv_old, cpu);
+
+    xfree(mem);
+}
+
 /*
  * Remove a pCPU from its cpupool. Its scheduler becomes &sched_idle_ops
  * (the idle scheduler).
@@ -3229,53 +3298,23 @@ out:
  */
 int schedule_cpu_rm(unsigned int cpu)
 {
-    void *ppriv_old, *vpriv_old;
-    struct sched_resource *sr, **sr_new = NULL;
+    struct sched_resource *sr;
+    struct cpu_rm_data *data;
     struct sched_unit *unit;
-    struct scheduler *old_ops;
     spinlock_t *old_lock;
     unsigned long flags;
-    int idx, ret = -ENOMEM;
+    int idx = 0;
     unsigned int cpu_iter;
 
+    data = alloc_cpu_rm_data(cpu);
+    if ( !data )
+        return -ENOMEM;
+
     rcu_read_lock(&sched_res_rculock);
 
     sr = get_sched_res(cpu);
-    old_ops = sr->scheduler;
-
-    if ( sr->granularity > 1 )
-    {
-        sr_new = xmalloc_array(struct sched_resource *, sr->granularity - 1);
-        if ( !sr_new )
-            goto out;
-        for ( idx = 0; idx < sr->granularity - 1; idx++ )
-        {
-            sr_new[idx] = sched_alloc_res();
-            if ( sr_new[idx] )
-            {
-                sr_new[idx]->sched_unit_idle = sched_alloc_unit_mem();
-                if ( !sr_new[idx]->sched_unit_idle )
-                {
-                    sched_res_free(&sr_new[idx]->rcu);
-                    sr_new[idx] = NULL;
-                }
-            }
-            if ( !sr_new[idx] )
-            {
-                for ( idx--; idx >= 0; idx-- )
-                    sched_res_free(&sr_new[idx]->rcu);
-                goto out;
-            }
-            sr_new[idx]->curr = sr_new[idx]->sched_unit_idle;
-            sr_new[idx]->scheduler = &sched_idle_ops;
-            sr_new[idx]->granularity = 1;
 
-            /* We want the lock not to change when replacing the resource. */
-            sr_new[idx]->schedule_lock = sr->schedule_lock;
-        }
-    }
-
-    ret = 0;
+    ASSERT(sr->granularity);
     ASSERT(sr->cpupool != NULL);
     ASSERT(cpumask_test_cpu(cpu, &cpupool_free_cpus));
     ASSERT(!cpumask_test_cpu(cpu, sr->cpupool->cpu_valid));
@@ -3283,10 +3322,6 @@ int schedule_cpu_rm(unsigned int cpu)
     /* See comment in schedule_cpu_add() regarding lock switching. */
     old_lock = pcpu_schedule_lock_irqsave(cpu, &flags);
 
-    vpriv_old = idle_vcpu[cpu]->sched_unit->priv;
-    ppriv_old = sr->sched_priv;
-
-    idx = 0;
     for_each_cpu ( cpu_iter, sr->cpus )
     {
         per_cpu(sched_res_idx, cpu_iter) = 0;
@@ -3300,27 +3335,27 @@ int schedule_cpu_rm(unsigned int cpu)
         else
         {
             /* Initialize unit. */
-            unit = sr_new[idx]->sched_unit_idle;
-            unit->res = sr_new[idx];
+            unit = data->sr[idx]->sched_unit_idle;
+            unit->res = data->sr[idx];
             unit->is_running = true;
             sched_unit_add_vcpu(unit, idle_vcpu[cpu_iter]);
             sched_domain_insert_unit(unit, idle_vcpu[cpu_iter]->domain);
 
             /* Adjust cpu masks of resources (old and new). */
             cpumask_clear_cpu(cpu_iter, sr->cpus);
-            cpumask_set_cpu(cpu_iter, sr_new[idx]->cpus);
+            cpumask_set_cpu(cpu_iter, data->sr[idx]->cpus);
             cpumask_set_cpu(cpu_iter, &sched_res_mask);
 
             /* Init timer. */
-            init_timer(&sr_new[idx]->s_timer, s_timer_fn, NULL, cpu_iter);
+            init_timer(&data->sr[idx]->s_timer, s_timer_fn, NULL, cpu_iter);
 
             /* Last resource initializations and insert resource pointer. */
-            sr_new[idx]->master_cpu = cpu_iter;
-            set_sched_res(cpu_iter, sr_new[idx]);
+            data->sr[idx]->master_cpu = cpu_iter;
+            set_sched_res(cpu_iter, data->sr[idx]);
 
             /* Last action: set the new lock pointer. */
             smp_mb();
-            sr_new[idx]->schedule_lock = &sched_free_cpu_lock;
+            data->sr[idx]->schedule_lock = &sched_free_cpu_lock;
 
             idx++;
         }
@@ -3336,16 +3371,12 @@ int schedule_cpu_rm(unsigned int cpu)
     /* _Not_ pcpu_schedule_unlock(): schedule_lock may have changed! */
     spin_unlock_irqrestore(old_lock, flags);
 
-    sched_deinit_pdata(old_ops, ppriv_old, cpu);
-
-    sched_free_udata(old_ops, vpriv_old);
-    sched_free_pdata(old_ops, ppriv_old, cpu);
+    sched_deinit_pdata(data->old_ops, data->ppriv_old, cpu);
 
-out:
     rcu_read_unlock(&sched_res_rculock);
-    xfree(sr_new);
+    free_cpu_rm_data(data, cpu);
 
-    return ret;
+    return 0;
 }
 
 struct scheduler *scheduler_get_default(void)
diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h
index 6e036f8c80..ff31854252 100644
--- a/xen/common/sched/private.h
+++ b/xen/common/sched/private.h
@@ -600,6 +600,15 @@ struct affinity_masks {
 
 bool alloc_affinity_masks(struct affinity_masks *affinity);
 void free_affinity_masks(struct affinity_masks *affinity);
+
+/* Memory allocation related data for schedule_cpu_rm(). */
+struct cpu_rm_data {
+    const struct scheduler *old_ops;
+    void *ppriv_old;
+    void *vpriv_old;
+    struct sched_resource *sr[];
+};
+
 void sched_rm_cpu(unsigned int cpu);
 const cpumask_t *sched_get_opt_cpumask(enum sched_gran opt, unsigned int cpu);
 void schedule_dump(struct cpupool *c);
@@ -608,6 +617,8 @@ struct scheduler *scheduler_alloc(unsigned int sched_id);
 void scheduler_free(struct scheduler *sched);
 int cpu_disable_scheduler(unsigned int cpu);
 int schedule_cpu_add(unsigned int cpu, struct cpupool *c);
+struct cpu_rm_data *alloc_cpu_rm_data(unsigned int cpu);
+void free_cpu_rm_data(struct cpu_rm_data *mem, unsigned int cpu);
 int schedule_cpu_rm(unsigned int cpu);
 int sched_move_domain(struct domain *d, struct cpupool *c);
 struct cpupool *cpupool_get_by_id(unsigned int poolid);
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.15


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 03:36:16 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 03:36:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.430864.683064 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ontgm-0003Da-0S; Thu, 27 Oct 2022 03:36:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 430864.683064; Thu, 27 Oct 2022 03:36:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ontgl-0003DT-Td; Thu, 27 Oct 2022 03:36:15 +0000
Received: by outflank-mailman (input) for mailman id 430864;
 Thu, 27 Oct 2022 03:36:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ontgk-0003DH-Ew
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:36:14 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ontgk-0000m3-E8
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:36:14 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ontgk-00052P-DO
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:36:14 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=vd4TOx7Ytgo+d/IuT3BTyEWSRdQkNfGIrU2uShtzhWU=; b=pY5qKt+0klhJZMR8dxNTlGLafx
	9IK+ZnixC53q/lqAHuD+BkIMF8pd+5boMshW0NJ0/SX7vxTHVMk+dEEQZWB1HkbsUH0H72z1hxAOW
	XT5SCpTZDGvUoqwitWJRTbnUQhlIQLvqf4EefT8hB7DbrMUx2j+bkZ2oBNxy8yeCLJCA=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.15] xen/sched: fix cpu hotplug
Message-Id: <E1ontgk-00052P-DO@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 03:36:14 +0000

commit d17680808b4c8015e31070c971e1ee548170ae34
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Tue Oct 11 15:15:41 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:15:41 2022 +0200

    xen/sched: fix cpu hotplug
    
    Cpu unplugging is calling schedule_cpu_rm() via stop_machine_run() with
    interrupts disabled, thus any memory allocation or freeing must be
    avoided.
    
    Since commit 5047cd1d5dea ("xen/common: Use enhanced
    ASSERT_ALLOC_CONTEXT in xmalloc()") this restriction is being enforced
    via an assertion, which will now fail.
    
    Fix this by allocating needed memory before entering stop_machine_run()
    and freeing any memory only after having finished stop_machine_run().
    
    Fixes: 1ec410112cdd ("xen/sched: support differing granularity in schedule_cpu_[add/rm]()")
    Reported-by: Gao Ruifeng <ruifeng.gao@intel.com>
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Tested-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: d84473689611eed32fd90b27e614f28af767fa3f
    master date: 2022-09-05 11:42:30 +0100
---
 xen/common/sched/core.c    | 25 +++++++++++++----
 xen/common/sched/cpupool.c | 69 ++++++++++++++++++++++++++++++++++++----------
 xen/common/sched/private.h |  5 ++--
 3 files changed, 77 insertions(+), 22 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 2decb1161a..900aab8f66 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -3231,7 +3231,7 @@ out:
  * by alloc_cpu_rm_data() is modified only in case the cpu in question is
  * being moved from or to a cpupool.
  */
-struct cpu_rm_data *alloc_cpu_rm_data(unsigned int cpu)
+struct cpu_rm_data *alloc_cpu_rm_data(unsigned int cpu, bool aff_alloc)
 {
     struct cpu_rm_data *data;
     const struct sched_resource *sr;
@@ -3244,6 +3244,17 @@ struct cpu_rm_data *alloc_cpu_rm_data(unsigned int cpu)
     if ( !data )
         goto out;
 
+    if ( aff_alloc )
+    {
+        if ( !alloc_affinity_masks(&data->affinity) )
+        {
+            XFREE(data);
+            goto out;
+        }
+    }
+    else
+        memset(&data->affinity, 0, sizeof(data->affinity));
+
     data->old_ops = sr->scheduler;
     data->vpriv_old = idle_vcpu[cpu]->sched_unit->priv;
     data->ppriv_old = sr->sched_priv;
@@ -3264,6 +3275,7 @@ struct cpu_rm_data *alloc_cpu_rm_data(unsigned int cpu)
         {
             while ( idx > 0 )
                 sched_res_free(&data->sr[--idx]->rcu);
+            free_affinity_masks(&data->affinity);
             XFREE(data);
             goto out;
         }
@@ -3286,6 +3298,7 @@ void free_cpu_rm_data(struct cpu_rm_data *mem, unsigned int cpu)
 {
     sched_free_udata(mem->old_ops, mem->vpriv_old);
     sched_free_pdata(mem->old_ops, mem->ppriv_old, cpu);
+    free_affinity_masks(&mem->affinity);
 
     xfree(mem);
 }
@@ -3296,17 +3309,18 @@ void free_cpu_rm_data(struct cpu_rm_data *mem, unsigned int cpu)
  * The cpu is already marked as "free" and not valid any longer for its
  * cpupool.
  */
-int schedule_cpu_rm(unsigned int cpu)
+int schedule_cpu_rm(unsigned int cpu, struct cpu_rm_data *data)
 {
     struct sched_resource *sr;
-    struct cpu_rm_data *data;
     struct sched_unit *unit;
     spinlock_t *old_lock;
     unsigned long flags;
     int idx = 0;
     unsigned int cpu_iter;
+    bool free_data = !data;
 
-    data = alloc_cpu_rm_data(cpu);
+    if ( !data )
+        data = alloc_cpu_rm_data(cpu, false);
     if ( !data )
         return -ENOMEM;
 
@@ -3374,7 +3388,8 @@ int schedule_cpu_rm(unsigned int cpu)
     sched_deinit_pdata(data->old_ops, data->ppriv_old, cpu);
 
     rcu_read_unlock(&sched_res_rculock);
-    free_cpu_rm_data(data, cpu);
+    if ( free_data )
+        free_cpu_rm_data(data, cpu);
 
     return 0;
 }
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 45b6ff9956..b5a948639a 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -402,22 +402,28 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c)
 }
 
 /* Update affinities of all domains in a cpupool. */
-static void cpupool_update_node_affinity(const struct cpupool *c)
+static void cpupool_update_node_affinity(const struct cpupool *c,
+                                         struct affinity_masks *masks)
 {
-    struct affinity_masks masks;
+    struct affinity_masks local_masks;
     struct domain *d;
 
-    if ( !alloc_affinity_masks(&masks) )
-        return;
+    if ( !masks )
+    {
+        if ( !alloc_affinity_masks(&local_masks) )
+            return;
+        masks = &local_masks;
+    }
 
     rcu_read_lock(&domlist_read_lock);
 
     for_each_domain_in_cpupool(d, c)
-        domain_update_node_aff(d, &masks);
+        domain_update_node_aff(d, masks);
 
     rcu_read_unlock(&domlist_read_lock);
 
-    free_affinity_masks(&masks);
+    if ( masks == &local_masks )
+        free_affinity_masks(masks);
 }
 
 /*
@@ -451,15 +457,17 @@ static int cpupool_assign_cpu_locked(struct cpupool *c, unsigned int cpu)
 
     rcu_read_unlock(&sched_res_rculock);
 
-    cpupool_update_node_affinity(c);
+    cpupool_update_node_affinity(c, NULL);
 
     return 0;
 }
 
-static int cpupool_unassign_cpu_finish(struct cpupool *c)
+static int cpupool_unassign_cpu_finish(struct cpupool *c,
+                                       struct cpu_rm_data *mem)
 {
     int cpu = cpupool_moving_cpu;
     const cpumask_t *cpus;
+    struct affinity_masks *masks = mem ? &mem->affinity : NULL;
     int ret;
 
     if ( c != cpupool_cpu_moving )
@@ -482,7 +490,7 @@ static int cpupool_unassign_cpu_finish(struct cpupool *c)
      */
     if ( !ret )
     {
-        ret = schedule_cpu_rm(cpu);
+        ret = schedule_cpu_rm(cpu, mem);
         if ( ret )
             cpumask_andnot(&cpupool_free_cpus, &cpupool_free_cpus, cpus);
         else
@@ -494,7 +502,7 @@ static int cpupool_unassign_cpu_finish(struct cpupool *c)
     }
     rcu_read_unlock(&sched_res_rculock);
 
-    cpupool_update_node_affinity(c);
+    cpupool_update_node_affinity(c, masks);
 
     return ret;
 }
@@ -558,7 +566,7 @@ static long cpupool_unassign_cpu_helper(void *info)
                       cpupool_cpu_moving->cpupool_id, cpupool_moving_cpu);
     spin_lock(&cpupool_lock);
 
-    ret = cpupool_unassign_cpu_finish(c);
+    ret = cpupool_unassign_cpu_finish(c, NULL);
 
     spin_unlock(&cpupool_lock);
     debugtrace_printk("cpupool_unassign_cpu ret=%ld\n", ret);
@@ -701,7 +709,7 @@ static int cpupool_cpu_add(unsigned int cpu)
  * This function is called in stop_machine context, so we can be sure no
  * non-idle vcpu is active on the system.
  */
-static void cpupool_cpu_remove(unsigned int cpu)
+static void cpupool_cpu_remove(unsigned int cpu, struct cpu_rm_data *mem)
 {
     int ret;
 
@@ -709,7 +717,7 @@ static void cpupool_cpu_remove(unsigned int cpu)
 
     if ( !cpumask_test_cpu(cpu, &cpupool_free_cpus) )
     {
-        ret = cpupool_unassign_cpu_finish(cpupool0);
+        ret = cpupool_unassign_cpu_finish(cpupool0, mem);
         BUG_ON(ret);
     }
     cpumask_clear_cpu(cpu, &cpupool_free_cpus);
@@ -775,7 +783,7 @@ static void cpupool_cpu_remove_forced(unsigned int cpu)
         {
             ret = cpupool_unassign_cpu_start(c, master_cpu);
             BUG_ON(ret);
-            ret = cpupool_unassign_cpu_finish(c);
+            ret = cpupool_unassign_cpu_finish(c, NULL);
             BUG_ON(ret);
         }
     }
@@ -993,12 +1001,24 @@ void dump_runq(unsigned char key)
 static int cpu_callback(
     struct notifier_block *nfb, unsigned long action, void *hcpu)
 {
+    static struct cpu_rm_data *mem;
+
     unsigned int cpu = (unsigned long)hcpu;
     int rc = 0;
 
     switch ( action )
     {
     case CPU_DOWN_FAILED:
+        if ( system_state <= SYS_STATE_active )
+        {
+            if ( mem )
+            {
+                free_cpu_rm_data(mem, cpu);
+                mem = NULL;
+            }
+            rc = cpupool_cpu_add(cpu);
+        }
+        break;
     case CPU_ONLINE:
         if ( system_state <= SYS_STATE_active )
             rc = cpupool_cpu_add(cpu);
@@ -1006,12 +1026,31 @@ static int cpu_callback(
     case CPU_DOWN_PREPARE:
         /* Suspend/Resume don't change assignments of cpus to cpupools. */
         if ( system_state <= SYS_STATE_active )
+        {
             rc = cpupool_cpu_remove_prologue(cpu);
+            if ( !rc )
+            {
+                ASSERT(!mem);
+                mem = alloc_cpu_rm_data(cpu, true);
+                rc = mem ? 0 : -ENOMEM;
+            }
+        }
         break;
     case CPU_DYING:
         /* Suspend/Resume don't change assignments of cpus to cpupools. */
         if ( system_state <= SYS_STATE_active )
-            cpupool_cpu_remove(cpu);
+        {
+            ASSERT(mem);
+            cpupool_cpu_remove(cpu, mem);
+        }
+        break;
+    case CPU_DEAD:
+        if ( system_state <= SYS_STATE_active )
+        {
+            ASSERT(mem);
+            free_cpu_rm_data(mem, cpu);
+            mem = NULL;
+        }
         break;
     case CPU_RESUME_FAILED:
         cpupool_cpu_remove_forced(cpu);
diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h
index ff31854252..3bab78ccb2 100644
--- a/xen/common/sched/private.h
+++ b/xen/common/sched/private.h
@@ -603,6 +603,7 @@ void free_affinity_masks(struct affinity_masks *affinity);
 
 /* Memory allocation related data for schedule_cpu_rm(). */
 struct cpu_rm_data {
+    struct affinity_masks affinity;
     const struct scheduler *old_ops;
     void *ppriv_old;
     void *vpriv_old;
@@ -617,9 +618,9 @@ struct scheduler *scheduler_alloc(unsigned int sched_id);
 void scheduler_free(struct scheduler *sched);
 int cpu_disable_scheduler(unsigned int cpu);
 int schedule_cpu_add(unsigned int cpu, struct cpupool *c);
-struct cpu_rm_data *alloc_cpu_rm_data(unsigned int cpu);
+struct cpu_rm_data *alloc_cpu_rm_data(unsigned int cpu, bool aff_alloc);
 void free_cpu_rm_data(struct cpu_rm_data *mem, unsigned int cpu);
-int schedule_cpu_rm(unsigned int cpu);
+int schedule_cpu_rm(unsigned int cpu, struct cpu_rm_data *mem);
 int sched_move_domain(struct domain *d, struct cpupool *c);
 struct cpupool *cpupool_get_by_id(unsigned int poolid);
 void cpupool_put(struct cpupool *pool);
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.15


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 03:36:26 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 03:36:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.430865.683068 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ontgw-0003Hj-3o; Thu, 27 Oct 2022 03:36:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 430865.683068; Thu, 27 Oct 2022 03:36:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ontgw-0003Hc-0P; Thu, 27 Oct 2022 03:36:26 +0000
Received: by outflank-mailman (input) for mailman id 430865;
 Thu, 27 Oct 2022 03:36:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ontgu-0003HC-Hg
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:36:24 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ontgu-0000mD-Gv
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:36:24 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ontgu-00052z-GH
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:36:24 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=JoarR3uxeo9q68J664V6uzHguXM9lteuSU52TOyNLLA=; b=vGCtKt6IecL0/TPtPV/KmhIkRF
	JSlkxXi2lpkF8fAepDowMY1D9Xu/WO0s+m/OMYvX+26Q3qArGsFU1B7Re4XK3kr3lgsLYMyfNyu1v
	Cz47f4hniO3AwR59v5qdAqHCXblf+cZ+BKY7Z8nLwx/YdwRuIz/II23MUDBFuq2vIT6s=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.15] Config.mk: correct PIE-related option(s) in EMBEDDED_EXTRA_CFLAGS
Message-Id: <E1ontgu-00052z-GH@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 03:36:24 +0000

commit 19cf28b515f21da02df80e68f901ad7650daaa37
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 15:15:55 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:15:55 2022 +0200

    Config.mk: correct PIE-related option(s) in EMBEDDED_EXTRA_CFLAGS
    
    I haven't been able to find evidence of "-nopie" ever having been a
    supported compiler option. The correct spelling is "-no-pie".
    Furthermore like "-pie" this is an option which is solely passed to the
    linker. The compiler only recognizes "-fpie" / "-fPIE" / "-fno-pie", and
    it doesn't infer these options from "-pie" / "-no-pie".
    
    Add the compiler recognized form, but for the possible case of the
    variable also being used somewhere for linking keep the linker option as
    well (with corrected spelling).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
    
    Build: Drop -no-pie from EMBEDDED_EXTRA_CFLAGS
    
    This breaks all Clang builds, as demonstrated by Gitlab CI.
    
    Contrary to the description in ecd6b9759919, -no-pie is not even an option
    passed to the linker.  GCC's actual behaviour is to inhibit the passing of
    -pie to the linker, as well as selecting different crt0 artefacts to be linked.
    
    EMBEDDED_EXTRA_CFLAGS is not used for $(CC)-doing-linking, and not liable to
    gain such a use case.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Tested-by: Stefano Stabellini <sstabellini@kernel.org>
    Fixes: ecd6b9759919 ("Config.mk: correct PIE-related option(s) in EMBEDDED_EXTRA_CFLAGS")
    master commit: ecd6b9759919fa6335b0be1b5fc5cce29a30c4f1
    master date: 2022-09-08 09:25:26 +0200
    master commit: 13a7c0074ac8fb31f6c0485429b7a20a1946cb22
    master date: 2022-09-27 15:40:42 -0700
---
 Config.mk | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Config.mk b/Config.mk
index 96d89b2f7d..9f87608f66 100644
--- a/Config.mk
+++ b/Config.mk
@@ -203,7 +203,7 @@ endif
 APPEND_LDFLAGS += $(foreach i, $(APPEND_LIB), -L$(i))
 APPEND_CFLAGS += $(foreach i, $(APPEND_INCLUDES), -I$(i))
 
-EMBEDDED_EXTRA_CFLAGS := -nopie -fno-stack-protector -fno-stack-protector-all
+EMBEDDED_EXTRA_CFLAGS := -fno-pie -fno-stack-protector -fno-stack-protector-all
 EMBEDDED_EXTRA_CFLAGS += -fno-exceptions -fno-asynchronous-unwind-tables
 
 XEN_EXTFILES_URL ?= http://xenbits.xen.org/xen-extfiles
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.15


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 03:36:36 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 03:36:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.430866.683073 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onth6-0003Ky-5L; Thu, 27 Oct 2022 03:36:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 430866.683073; Thu, 27 Oct 2022 03:36:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onth6-0003Kq-2B; Thu, 27 Oct 2022 03:36:36 +0000
Received: by outflank-mailman (input) for mailman id 430866;
 Thu, 27 Oct 2022 03:36:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onth4-0003KV-Ke
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:36:34 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onth4-0000mR-K0
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:36:34 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onth4-00053k-JC
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:36:34 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=RZ1X0o86SVAnshH4y4Tagorpmq5G3o+jf8LgUvsZUuM=; b=TLZqPg2dgQY074OvywyconBUnv
	pUvagkJ1i3/sipXjlVU8QirJjdZqa0mxPZEMt0uqaCT9fHDFA+uqNFWTsuTHI9NIO2ZTmaAehsIEc
	jvmg+JjP0SA4/sZZ7TTav9hJWqDxqk9lP/EqI4HVciimh4wKjKLm0YCEMxVx+MXLf3Rs=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.15] tools/xenstore: minor fix of the migration stream doc
Message-Id: <E1onth4-00053k-JC@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 03:36:34 +0000

commit 182f8bb503b9dd3db5dd9118dc763d241787c6fc
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Tue Oct 11 15:16:09 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:16:09 2022 +0200

    tools/xenstore: minor fix of the migration stream doc
    
    Drop mentioning the non-existent read-only socket in the migration
    stream description document.
    
    The related record field was removed in commit 8868a0e3f674 ("docs:
    update the xenstore migration stream documentation").
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
    master commit: ace1d2eff80d3d66c37ae765dae3e3cb5697e5a4
    master date: 2022-09-08 09:25:58 +0200
---
 docs/designs/xenstore-migration.md | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/docs/designs/xenstore-migration.md b/docs/designs/xenstore-migration.md
index 5f1155273e..78530bbb0e 100644
--- a/docs/designs/xenstore-migration.md
+++ b/docs/designs/xenstore-migration.md
@@ -129,11 +129,9 @@ xenstored state that needs to be restored.
 | `evtchn-fd`    | The file descriptor used to communicate with |
 |                | the event channel driver                     |
 
-xenstored will resume in the original process context. Hence `rw-socket-fd` and
-`ro-socket-fd` simply specify the file descriptors of the sockets. Sockets
-are not always used, however, and so -1 will be used to denote an unused
-socket.
-
+xenstored will resume in the original process context. Hence `rw-socket-fd`
+simply specifies the file descriptor of the socket. Sockets are not always
+used, however, and so -1 will be used to denote an unused socket.
 
 \pagebreak
 
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.15


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 03:36:46 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 03:36:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.430867.683076 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onthG-0003O3-6E; Thu, 27 Oct 2022 03:36:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 430867.683076; Thu, 27 Oct 2022 03:36:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onthG-0003Nv-3e; Thu, 27 Oct 2022 03:36:46 +0000
Received: by outflank-mailman (input) for mailman id 430867;
 Thu, 27 Oct 2022 03:36:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onthE-0003Ni-Nz
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:36:44 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onthE-0000o2-NJ
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:36:44 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onthE-00054o-ML
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:36:44 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=iSM3o4cH3u1HrRXE0pMQE/WD/nQ2X4gHFo8TWyplVNo=; b=2ixjDe3XQ6u+echbYdpMFLvjgH
	67M+uvOLOQTLYwqeDvD+ynxLahEYdV3zzX0pqEKcqo1jdnB63Ep6DeCbd9otBmPzXCVmR2l8VZe4X
	xrVZNhvlVC0regromYOqDwQGc3afdOD+BcuCloWOdjO9oOt6ytYNhSk79OF1vfRmWNLw=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.15] xen/gnttab: fix gnttab_acquire_resource()
Message-Id: <E1onthE-00054o-ML@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 03:36:44 +0000

commit 3ac64b3751837a117ee3dfb3e2cc27057a83d0f7
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Tue Oct 11 15:16:53 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:16:53 2022 +0200

    xen/gnttab: fix gnttab_acquire_resource()
    
    Commit 9dc46386d89d ("gnttab: work around "may be used uninitialized"
    warning") was wrong, as vaddrs can legitimately be NULL in case
    XENMEM_resource_grant_table_id_status was specified for a grant table
    v1. This would result in crashes in debug builds due to
    ASSERT_UNREACHABLE() triggering.
    
    Check vaddrs for being NULL only in the rc == 0 case.
    
    Expand the tests in tools/tests/resource to tickle this path, and verify that
    using XENMEM_resource_grant_table_id_status on a v1 grant table fails.
    
    Fixes: 9dc46386d89d ("gnttab: work around "may be used uninitialized" warning")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com> # xen
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 52daa6a8483e4fbd6757c9d1b791e23931791608
    master date: 2022-09-09 16:28:38 +0100
---
 tools/tests/resource/test-resource.c | 15 +++++++++++++++
 xen/common/grant_table.c             |  2 +-
 2 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/tools/tests/resource/test-resource.c b/tools/tests/resource/test-resource.c
index 1caaa60e62..bf485baff2 100644
--- a/tools/tests/resource/test-resource.c
+++ b/tools/tests/resource/test-resource.c
@@ -63,6 +63,21 @@ static void test_gnttab(uint32_t domid, unsigned int nr_frames)
     rc = xenforeignmemory_unmap_resource(fh, res);
     if ( rc )
         return fail("    Fail: Unmap %d - %s\n", errno, strerror(errno));
+
+    /*
+     * Verify that an attempt to map the status frames fails, as the domain is
+     * in gnttab v1 mode.
+     */
+    res = xenforeignmemory_map_resource(
+        fh, domid, XENMEM_resource_grant_table,
+        XENMEM_resource_grant_table_id_status, 0, 1,
+        (void **)&gnttab, PROT_READ | PROT_WRITE, 0);
+
+    if ( res )
+    {
+        fail("    Fail: Managed to map gnttab v2 status frames in v1 mode\n");
+        xenforeignmemory_unmap_resource(fh, res);
+    }
 }
 
 static void test_domain_configurations(void)
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 0523beb9b7..01e426c67f 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -4138,7 +4138,7 @@ int gnttab_acquire_resource(
      * on non-error paths, and hence it needs setting to NULL at the top of the
      * function.  Leave some runtime safety.
      */
-    if ( !vaddrs )
+    if ( !rc && !vaddrs )
     {
         ASSERT_UNREACHABLE();
         rc = -ENODATA;
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.15
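The one-line hunk above gates the "should be unreachable" NULL check on rc == 0, since a NULL output array is legitimate on error paths. A minimal standalone sketch of that guard pattern follows; the function name, parameters, and error values are hypothetical stand-ins, not actual Xen code:

```c
#include <errno.h>

/* Hypothetical stand-in for the tail of gnttab_acquire_resource():
 * the output array may legitimately stay NULL when an earlier step
 * already set a non-zero rc (e.g. requesting v2 status frames of a
 * v1 grant table), so the "should be unreachable" NULL check must
 * fire only when rc == 0, as in the hunk above. */
static int finish_acquire(int rc, void *vaddrs)
{
    if ( !rc && !vaddrs )   /* NULL is only an error on success paths */
        return -ENODATA;
    return rc;
}
```

With this shape, an error such as -EINVAL from the lookup step passes through untouched even though vaddrs is NULL, while a NULL result on an otherwise successful path is still flagged as an internal error.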


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 03:36:56 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 03:36:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.430868.683079 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onthQ-0003RV-7j; Thu, 27 Oct 2022 03:36:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 430868.683079; Thu, 27 Oct 2022 03:36:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onthQ-0003RN-59; Thu, 27 Oct 2022 03:36:56 +0000
Received: by outflank-mailman (input) for mailman id 430868;
 Thu, 27 Oct 2022 03:36:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onthO-0003R4-Qn
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:36:54 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onthO-0000oC-Q8
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:36:54 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onthO-00055i-PP
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:36:54 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=adLiuTXpI7gbKeCX2SuSD2uC/os5NqM4dl4TLuiEJCg=; b=MJxy0YDyLkapUvhdVM/hoABCOb
	WSLmIYrrH2Kgb3LFNwxodaa5JHW7zWIxvxK90dLkJF+IVKyTzVQzXFUaXmbpGjRQCv0Wppsw0iv+w
	ErAaDXn/Dn1Ti/izUCh7xPiWt9Ybck3E1tUWIM0wkpkKgSK0zjd4SFNFHiM623dn0AEI=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.15] x86: wire up VCPUOP_register_vcpu_time_memory_area for 32-bit guests
Message-Id: <E1onthO-00055i-PP@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 03:36:54 +0000

commit 62e534d17cdd838828bfd75d3d845e31524dd336
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 15:17:12 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:17:12 2022 +0200

    x86: wire up VCPUOP_register_vcpu_time_memory_area for 32-bit guests
    
    Ever since its introduction, VCPUOP_register_vcpu_time_memory_area
    has been available only to native domains. Linux, for example, would
    attempt to use it irrespective of guest bitness (including in its
    so-called PVHVM mode) as long as it finds XEN_PVCLOCK_TSC_STABLE_BIT
    set (which we set only for clocksource=tsc, which in turn needs
    enabling via a command line option).
    
    Fixes: a5d39947cb89 ("Allow guests to register secondary vcpu_time_info")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    master commit: b726541d94bd0a80b5864d17a2cd2e6d73a3fe0a
    master date: 2022-09-29 14:47:45 +0200
---
 xen/arch/x86/x86_64/domain.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/xen/arch/x86/x86_64/domain.c b/xen/arch/x86/x86_64/domain.c
index c46dccc25a..d51d993447 100644
--- a/xen/arch/x86/x86_64/domain.c
+++ b/xen/arch/x86/x86_64/domain.c
@@ -54,6 +54,26 @@ arch_compat_vcpu_op(
         break;
     }
 
+    case VCPUOP_register_vcpu_time_memory_area:
+    {
+        struct compat_vcpu_register_time_memory_area area = { .addr.p = 0 };
+
+        rc = -EFAULT;
+        if ( copy_from_guest(&area.addr.h, arg, 1) )
+            break;
+
+        if ( area.addr.h.c != area.addr.p ||
+             !compat_handle_okay(area.addr.h, 1) )
+            break;
+
+        rc = 0;
+        guest_from_compat_handle(v->arch.time_info_guest, area.addr.h);
+
+        force_update_vcpu_system_time(v);
+
+        break;
+    }
+
     case VCPUOP_get_physid:
         rc = arch_do_vcpu_op(cmd, v, arg);
         break;
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.15


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 03:37:06 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 03:37:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.430869.683084 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ontha-0003WM-99; Thu, 27 Oct 2022 03:37:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 430869.683084; Thu, 27 Oct 2022 03:37:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ontha-0003WE-6e; Thu, 27 Oct 2022 03:37:06 +0000
Received: by outflank-mailman (input) for mailman id 430869;
 Thu, 27 Oct 2022 03:37:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onthY-0003Vv-Tg
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:37:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onthY-0000oW-T1
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:37:04 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onthY-00056p-SO
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:37:04 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=TbrBF48nQw6f0rChKV1qeoTiUilDAxqhX8i5I0aZdGc=; b=rE9Im5lAWmqJC56Yjr1MON/WK3
	19SsXnWs69R5q2O+h/4kvz+Mi6mXdMssNMcB6jedSvmUKXllun5WQe+u9JFwntVzT4PMDkiKfuoLb
	QwE9SraKOzhwotcc4IKziAXJj10AK65eSMOYoT7CYLPy5kmuQMFVl/fDi1QRQsDsZJFw=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.15] x86/vpmu: Fix race-condition in vpmu_load
Message-Id: <E1onthY-00056p-SO@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 03:37:04 +0000

commit 9690bb261d5fa09cb281e1fa124d93db7b84fda5
Author:     Tamas K Lengyel <tamas.lengyel@intel.com>
AuthorDate: Tue Oct 11 15:17:42 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:17:42 2022 +0200

    x86/vpmu: Fix race-condition in vpmu_load
    
    The vPMU code-base attempts to optimize saving/reloading of the PMU
    context by keeping track of which vCPU ran on each pCPU. When a pCPU
    is being scheduled, it checks whether the previous vCPU is the
    current one; if not, it attempts a call to vpmu_save_force.
    Unfortunately, if the previous vCPU is already being scheduled to
    run on another pCPU, its state will already be runnable, which
    results in an ASSERT failure.
    
    Fix this by always performing a pmu context save in vpmu_save when called from
    vpmu_switch_from, and do a vpmu_load when called from vpmu_switch_to.
    
    While this adds minimal overhead when the same vCPU is rescheduled
    on the same pCPU, the ASSERT failure is avoided and the code is a
    lot easier to reason about.
    
    Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    master commit: defa4e51d20a143bdd4395a075bf0933bb38a9a4
    master date: 2022-09-30 09:53:49 +0200
---
 xen/arch/x86/cpu/vpmu.c | 42 ++++--------------------------------------
 1 file changed, 4 insertions(+), 38 deletions(-)

diff --git a/xen/arch/x86/cpu/vpmu.c b/xen/arch/x86/cpu/vpmu.c
index fb1b296a6c..800eff87dc 100644
--- a/xen/arch/x86/cpu/vpmu.c
+++ b/xen/arch/x86/cpu/vpmu.c
@@ -364,58 +364,24 @@ void vpmu_save(struct vcpu *v)
     vpmu->last_pcpu = pcpu;
     per_cpu(last_vcpu, pcpu) = v;
 
+    vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+
     if ( vpmu->arch_vpmu_ops )
         if ( vpmu->arch_vpmu_ops->arch_vpmu_save(v, 0) )
             vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
 
+    vpmu_reset(vpmu, VPMU_CONTEXT_SAVE);
+
     apic_write(APIC_LVTPC, PMU_APIC_VECTOR | APIC_LVT_MASKED);
 }
 
 int vpmu_load(struct vcpu *v, bool_t from_guest)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    int pcpu = smp_processor_id();
-    struct vcpu *prev = NULL;
 
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
         return 0;
 
-    /* First time this VCPU is running here */
-    if ( vpmu->last_pcpu != pcpu )
-    {
-        /*
-         * Get the context from last pcpu that we ran on. Note that if another
-         * VCPU is running there it must have saved this VPCU's context before
-         * startig to run (see below).
-         * There should be no race since remote pcpu will disable interrupts
-         * before saving the context.
-         */
-        if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-        {
-            on_selected_cpus(cpumask_of(vpmu->last_pcpu),
-                             vpmu_save_force, (void *)v, 1);
-            vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
-        }
-    } 
-
-    /* Prevent forced context save from remote CPU */
-    local_irq_disable();
-
-    prev = per_cpu(last_vcpu, pcpu);
-
-    if ( prev != v && prev )
-    {
-        vpmu = vcpu_vpmu(prev);
-
-        /* Someone ran here before us */
-        vpmu_save_force(prev);
-        vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
-
-        vpmu = vcpu_vpmu(v);
-    }
-
-    local_irq_enable();
-
     /* Only when PMU is counting, we load PMU context immediately. */
     if ( !vpmu_is_set(vpmu, VPMU_RUNNING) ||
          (!has_vlapic(vpmu_vcpu(vpmu)->domain) &&
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.15
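The vpmu_save() hunks above bracket the architecture-specific save handler with the VPMU_CONTEXT_SAVE flag and clear VPMU_CONTEXT_LOADED only when the handler asks for it. A small model of that bracketing follows; the flag values and helper names mirror the Xen ones but are made up here, and the real code operates on a per-vCPU vpmu structure rather than a global:

```c
/* Hypothetical model of the vpmu_save() change above: set a
 * CONTEXT_SAVE flag so the arch save handler knows a full save is
 * wanted, drop the LOADED flag only when the handler requests it by
 * returning non-zero, then clear the SAVE flag again. */
#define VPMU_CONTEXT_SAVE   0x1u
#define VPMU_CONTEXT_LOADED 0x2u

static unsigned int vpmu_flags = VPMU_CONTEXT_LOADED;

static void do_save(int (*arch_vpmu_save)(void))
{
    vpmu_flags |= VPMU_CONTEXT_SAVE;
    if ( arch_vpmu_save() )
        vpmu_flags &= ~VPMU_CONTEXT_LOADED;
    vpmu_flags &= ~VPMU_CONTEXT_SAVE;
}

static int save_keep(void) { return 0; } /* context stays loaded */
static int save_drop(void) { return 1; } /* ask to drop LOADED */
```

Either way, VPMU_CONTEXT_SAVE is visible only for the duration of the handler call, which is what lets the handler distinguish this deliberate save from other entry points.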


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 03:37:15 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 03:37:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.430870.683088 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onthj-0003ZN-B0; Thu, 27 Oct 2022 03:37:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 430870.683088; Thu, 27 Oct 2022 03:37:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onthj-0003ZE-82; Thu, 27 Oct 2022 03:37:15 +0000
Received: by outflank-mailman (input) for mailman id 430870;
 Thu, 27 Oct 2022 03:37:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onthj-0003Z7-1D
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:37:15 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onthj-0000oq-0X
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:37:15 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onthi-00057a-VI
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:37:14 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=cV5dVLubiKdrIb6q5R8oJbvmH0RUh7YPBLfCVOeRgWg=; b=WkPEQSVfuEwmgL0eEYo88WfsH8
	BSVgWvZik/ZnobnY0D/rO8uj3hki7Z5mPRWQkHvMo+rAmFhI2qtLJsuCCQQlgOOGfxrz5kCQn+vLZ
	l7atsWX7hRJvDjeV0prYmjUrdr52AG4/OdKcPMkkwNvPqvQ7LCUKKYv7vk4clvJ8g3Ek=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.15] tools/tests: fix wrong backport of upstream commit 52daa6a8483e4
Message-Id: <E1onthi-00057a-VI@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 03:37:14 +0000

commit 0d233924d4b0f676056856096e8761205add3ee8
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Wed Oct 12 17:31:44 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Wed Oct 12 17:31:44 2022 +0200

    tools/tests: fix wrong backport of upstream commit 52daa6a8483e4
    
    The backport of upstream commit 52daa6a8483e4 had a bug; correct it.
    
    Fixes: 3ac64b375183 ("xen/gnttab: fix gnttab_acquire_resource()")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
 tools/tests/resource/test-resource.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/tests/resource/test-resource.c b/tools/tests/resource/test-resource.c
index bf485baff2..51a8f4a000 100644
--- a/tools/tests/resource/test-resource.c
+++ b/tools/tests/resource/test-resource.c
@@ -71,7 +71,7 @@ static void test_gnttab(uint32_t domid, unsigned int nr_frames)
     res = xenforeignmemory_map_resource(
         fh, domid, XENMEM_resource_grant_table,
         XENMEM_resource_grant_table_id_status, 0, 1,
-        (void **)&gnttab, PROT_READ | PROT_WRITE, 0);
+        &addr, PROT_READ | PROT_WRITE, 0);
 
     if ( res )
     {
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.15


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 03:37:26 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 03:37:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.430871.683091 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onthu-0003ce-CA; Thu, 27 Oct 2022 03:37:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 430871.683091; Thu, 27 Oct 2022 03:37:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onthu-0003cX-9X; Thu, 27 Oct 2022 03:37:26 +0000
Received: by outflank-mailman (input) for mailman id 430871;
 Thu, 27 Oct 2022 03:37:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ontht-0003cO-49
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:37:25 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ontht-0000ox-3U
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:37:25 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ontht-00058M-2i
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:37:25 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=+bF+wJYxLddYTfmLgNlB3bdmpR06GAbgolOxZztNZeA=; b=CPwYCBFq663n0kQq9Qw0erwynW
	otRz8kidRBZWBdX1jYme8l7bHN2XYeitu3DPZMA7GJqGKWq4nEW8xgoPHfEVfmHv1Qgwhd0ajZCFA
	9mAagtzi8fNMxr+o7EoR4n1b8ZzbpNUszodC9oAo4IegBSTDrvdmgffFYf2JwXXvSCb0=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.15] libxl/Arm: correct xc_shadow_control() invocation to fix build
Message-Id: <E1ontht-00058M-2i@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 03:37:25 +0000

commit 816580afdd1730d4f85f64477a242a439af1cdf8
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Wed Oct 12 17:33:40 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Wed Oct 12 17:33:40 2022 +0200

    libxl/Arm: correct xc_shadow_control() invocation to fix build
    
    The backport didn't adapt to the earlier function prototype taking more
    (unused here) arguments.
    
    Fixes: c5215044578e ("xen/arm, libxl: Implement XEN_DOMCTL_shadow_op for Arm")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Henry Wang <Henry.Wang@arm.com>
    Acked-by: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/libs/light/libxl_arm.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index d21f614ed7..ba548befdd 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -132,14 +132,14 @@ int libxl__arch_domain_create(libxl__gc *gc,
                               uint32_t domid)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
-    unsigned int shadow_mb = DIV_ROUNDUP(d_config->b_info.shadow_memkb, 1024);
+    unsigned long shadow_mb = DIV_ROUNDUP(d_config->b_info.shadow_memkb, 1024);
 
     int r = xc_shadow_control(ctx->xch, domid,
                               XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION,
-                              &shadow_mb, 0);
+                              NULL, 0, &shadow_mb, 0, NULL);
     if (r) {
         LOGED(ERROR, domid,
-              "Failed to set %u MiB shadow allocation", shadow_mb);
+              "Failed to set %lu MiB shadow allocation", shadow_mb);
         return ERROR_FAIL;
     }
 
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.15


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 03:37:36 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 03:37:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.430872.683096 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onti4-0003fW-E4; Thu, 27 Oct 2022 03:37:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 430872.683096; Thu, 27 Oct 2022 03:37:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onti4-0003fO-B8; Thu, 27 Oct 2022 03:37:36 +0000
Received: by outflank-mailman (input) for mailman id 430872;
 Thu, 27 Oct 2022 03:37:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onti3-0003fA-7L
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:37:35 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onti3-0000p6-6Z
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:37:35 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onti3-000598-5n
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:37:35 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=w8wwR8UIN3SZ2G7ib7X48sGbVFKyBk1cqgmr6z2Zr5c=; b=hazV+5HynBrbuKtsyOHeTX6Olj
	hssk11Xa4ixzRJl6LldaBFE2kttguEBsrXcfFUKps+XKWj3BNdCoKIipG7rr8yEBVHLLrDyBATVe8
	upwGNMCfdquSJ6AVEhvQcKY1BRsO4xrtdrFL5adppKeUgaSK5cA+Mj9NBwp6eeNH3OSQ=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.15] arm/p2m: Rework p2m_init()
Message-Id: <E1onti3-000598-5n@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 03:37:35 +0000

commit 6f948fd1929c01b82a119f03670cab38ffebb47e
Author:     Andrew Cooper <andrew.cooper3@citrix.com>
AuthorDate: Tue Oct 25 09:21:11 2022 +0000
Commit:     Julien Grall <jgrall@amazon.com>
CommitDate: Tue Oct 25 20:57:58 2022 +0100

    arm/p2m: Rework p2m_init()
    
    p2m_init() is mostly trivial initialisation, but it has two fallible
    operations, one on either side of the backpointer assignment that
    triggers teardown to take action.
    
    p2m_free_vmid() is idempotent with a failed p2m_alloc_vmid(), so rearrange
    p2m_init() to perform all trivial setup, then set the backpointer, then
    perform all fallible setup.
    
    This will simplify a future bugfix which needs to add a third fallible
    operation.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
    (cherry picked from commit: 3783e583319fa1ce75e414d851f0fde191a14753)
---
 xen/arch/arm/p2m.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index c1055ff2a7..25eb1d84cb 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1733,7 +1733,7 @@ void p2m_final_teardown(struct domain *d)
 int p2m_init(struct domain *d)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    int rc = 0;
+    int rc;
     unsigned int cpu;
 
     rwlock_init(&p2m->lock);
@@ -1742,11 +1742,6 @@ int p2m_init(struct domain *d)
     INIT_PAGE_LIST_HEAD(&d->arch.paging.p2m_freelist);
 
     p2m->vmid = INVALID_VMID;
-
-    rc = p2m_alloc_vmid(d);
-    if ( rc != 0 )
-        return rc;
-
     p2m->max_mapped_gfn = _gfn(0);
     p2m->lowest_mapped_gfn = _gfn(ULONG_MAX);
 
@@ -1762,8 +1757,6 @@ int p2m_init(struct domain *d)
     p2m->clean_pte = is_iommu_enabled(d) &&
         !iommu_has_feature(d, IOMMU_FEAT_COHERENT_WALK);
 
-    rc = p2m_alloc_table(d);
-
     /*
      * Make sure that the type chosen to is able to store the an vCPU ID
      * between 0 and the maximum of virtual CPUS supported as long as
@@ -1776,13 +1769,20 @@ int p2m_init(struct domain *d)
        p2m->last_vcpu_ran[cpu] = INVALID_VCPU_ID;
 
     /*
-     * Besides getting a domain when we only have the p2m in hand,
-     * the back pointer to domain is also used in p2m_teardown()
-     * as an end-of-initialization indicator.
+     * "Trivial" initialisation is now complete.  Set the backpointer so
+     * p2m_teardown() and friends know to do something.
      */
     p2m->domain = d;
 
-    return rc;
+    rc = p2m_alloc_vmid(d);
+    if ( rc )
+        return rc;
+
+    rc = p2m_alloc_table(d);
+    if ( rc )
+        return rc;
+
+    return 0;
 }
 
 /*
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.15


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 03:37:46 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 03:37:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.430873.683100 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ontiE-0003j0-H6; Thu, 27 Oct 2022 03:37:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 430873.683100; Thu, 27 Oct 2022 03:37:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ontiE-0003is-EE; Thu, 27 Oct 2022 03:37:46 +0000
Received: by outflank-mailman (input) for mailman id 430873;
 Thu, 27 Oct 2022 03:37:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ontiD-0003ih-AG
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:37:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ontiD-0000pA-9X
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:37:45 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ontiD-00059v-8v
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 03:37:45 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=T04EgLODXP87/XR60/7oiROfXwLenImCDXhinu89YaI=; b=Eylu0lsEbpXv1tuW0wg+EIRFAc
	PP84eboZtcWChnurjUGBuC17pijDs2WRZPyEBTusfNZLDoUm26rh6r3oD8IdcE5PrevyaVMHKZiBB
	ibAv1NCH/KgsWFXorpgiPKKFKio72Wo9FOVnlUmSC7WJlFezB9A4wC0hLD3XXpRz1hi0=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.15] xen/arm: p2m: Populate pages for GICv2 mapping in p2m_init()
Message-Id: <E1ontiD-00059v-8v@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 03:37:45 +0000

commit f8915cd5dbe0f51e9bb31a54fe40600b839dd707
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Tue Oct 25 09:21:12 2022 +0000
Commit:     Julien Grall <jgrall@amazon.com>
CommitDate: Tue Oct 25 20:57:59 2022 +0100

    xen/arm: p2m: Populate pages for GICv2 mapping in p2m_init()
    
    Hardware using GICv2 needs to create a P2M mapping of the 8KB GICv2
    area when the domain is created. The worst case requires 6 P2M pages,
    as the two pages will be consecutive but not necessarily in the same
    L3 page table; to cover that and keep a buffer, populate 16 pages as
    the default value into the P2M pages pool in p2m_init() at the domain
    creation stage to satisfy the GICv2 requirement. For GICv3, the
    above-mentioned P2M mapping is not necessary, but since the 16 pages
    allocated here would not be lost, populate them unconditionally.
    
    With the default 16 P2M pages populated, domain creation can now fail
    with P2M pages already in use. To properly free the P2M in this case,
    first make preemption of p2m_teardown() optional, then call
    p2m_teardown() and p2m_set_allocation(d, 0, NULL) non-preemptively in
    p2m_final_teardown(). As a non-preemptive p2m_teardown() can only
    return 0, use a BUG_ON to confirm that.
    
    Since p2m_final_teardown() is called either after
    domain_relinquish_resources(), where relinquish_p2m_mapping() has been
    called, or from the failure path of domain_create()/arch_domain_create(),
    where mappings that require p2m_put_l3_page() should never be created,
    relinquish_p2m_mapping() is not added to p2m_final_teardown(); add
    in-code comments to explain this.
    
    Fixes: cbea5a1149ca ("xen/arm: Allocate and free P2M pages from the P2M pool")
    Suggested-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
    (cherry picked from commit: c7cff1188802646eaa38e918e5738da0e84949be)
---
 xen/arch/arm/domain.c     |  2 +-
 xen/arch/arm/p2m.c        | 34 ++++++++++++++++++++++++++++++++--
 xen/include/asm-arm/p2m.h | 14 ++++++++++----
 3 files changed, 43 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index a5ffd952ec..b11359b8cc 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -1041,7 +1041,7 @@ int domain_relinquish_resources(struct domain *d)
             return ret;
 
     PROGRESS(p2m):
-        ret = p2m_teardown(d);
+        ret = p2m_teardown(d, true);
         if ( ret )
             return ret;
 
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 25eb1d84cb..f6012f2a53 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1664,7 +1664,7 @@ static void p2m_free_vmid(struct domain *d)
     spin_unlock(&vmid_alloc_lock);
 }
 
-int p2m_teardown(struct domain *d)
+int p2m_teardown(struct domain *d, bool allow_preemption)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     unsigned long count = 0;
@@ -1672,6 +1672,9 @@ int p2m_teardown(struct domain *d)
     unsigned int i;
     int rc = 0;
 
+    if ( page_list_empty(&p2m->pages) )
+        return 0;
+
     p2m_write_lock(p2m);
 
     /*
@@ -1695,7 +1698,7 @@ int p2m_teardown(struct domain *d)
         p2m_free_page(p2m->domain, pg);
         count++;
         /* Arbitrarily preempt every 512 iterations */
-        if ( !(count % 512) && hypercall_preempt_check() )
+        if ( allow_preemption && !(count % 512) && hypercall_preempt_check() )
         {
             rc = -ERESTART;
             break;
@@ -1715,7 +1718,20 @@ void p2m_final_teardown(struct domain *d)
     if ( !p2m->domain )
         return;
 
+    /*
+     * No need to call relinquish_p2m_mapping() here because
+     * p2m_final_teardown() is called either after domain_relinquish_resources()
+     * where relinquish_p2m_mapping() has been called, or from failure path of
+     * domain_create()/arch_domain_create() where mappings that require
+     * p2m_put_l3_page() should never be created. For the latter case, also see
+     * comment on top of the p2m_set_entry() for more info.
+     */
+
+    BUG_ON(p2m_teardown(d, false));
     ASSERT(page_list_empty(&p2m->pages));
+
+    while ( p2m_teardown_allocation(d) == -ERESTART )
+        continue; /* No preemption support here */
     ASSERT(page_list_empty(&d->arch.paging.p2m_freelist));
 
     if ( p2m->root )
@@ -1782,6 +1798,20 @@ int p2m_init(struct domain *d)
     if ( rc )
         return rc;
 
+    /*
+     * Hardware using GICv2 needs to create a P2M mapping of 8KB GICv2 area
+     * when the domain is created. Considering the worst case for page
+     * tables and keep a buffer, populate 16 pages to the P2M pages pool here.
+     * For GICv3, the above-mentioned P2M mapping is not necessary, but since
+     * the allocated 16 pages here would not be lost, hence populate these
+     * pages unconditionally.
+     */
+    spin_lock(&d->arch.paging.lock);
+    rc = p2m_set_allocation(d, 16, NULL);
+    spin_unlock(&d->arch.paging.lock);
+    if ( rc )
+        return rc;
+
     return 0;
 }
 
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 18675b2345..ea7ca41d82 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -194,14 +194,18 @@ int p2m_init(struct domain *d);
 
 /*
  * The P2M resources are freed in two parts:
- *  - p2m_teardown() will be called when relinquish the resources. It
- *    will free large resources (e.g. intermediate page-tables) that
- *    requires preemption.
+ *  - p2m_teardown() will be called preemptively when relinquish the
+ *    resources, in which case it will free large resources (e.g. intermediate
+ *    page-tables) that requires preemption.
  *  - p2m_final_teardown() will be called when domain struct is been
  *    freed. This *cannot* be preempted and therefore one small
  *    resources should be freed here.
+ *  Note that p2m_final_teardown() will also call p2m_teardown(), to properly
+ *  free the P2M when failures happen in the domain creation with P2M pages
+ *  already in use. In this case p2m_teardown() is called non-preemptively and
+ *  p2m_teardown() will always return 0.
  */
-int p2m_teardown(struct domain *d);
+int p2m_teardown(struct domain *d, bool allow_preemption);
 void p2m_final_teardown(struct domain *d);
 
 /*
@@ -266,6 +270,8 @@ mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
 /*
  * Direct set a p2m entry: only for use by the P2M code.
  * The P2M write lock should be taken.
+ * TODO: Add a check in __p2m_set_entry() to avoid creating a mapping in
+ * arch_domain_create() that requires p2m_put_l3_page() to be called.
  */
 int p2m_set_entry(struct p2m_domain *p2m,
                   gfn_t sgfn,
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.15


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 09:33:07 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 09:33:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.430977.683359 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onzG3-0003qS-HW; Thu, 27 Oct 2022 09:33:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 430977.683359; Thu, 27 Oct 2022 09:33:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onzG3-0003qK-EI; Thu, 27 Oct 2022 09:33:03 +0000
Received: by outflank-mailman (input) for mailman id 430977;
 Thu, 27 Oct 2022 09:33:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onzG1-0003pa-Lk
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 09:33:01 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onzG1-0007qK-L3
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 09:33:01 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onzG1-0005P9-K3
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 09:33:01 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=HYH/k3Q3IJRs+XFFfoua7v08+rha42H6bhRtyQLW+Ek=; b=0pHQULiCstktFubEBj40FPr2a3
	LSrXgrdrffUp+z+PnyerY/RycbbGVhgemzk+IMi49JEhNeY34iECW9Zdzp8SHaphaUNyb+04KzmdZ
	xuYh/VNXzYG6HnkoyDkhrOVN/wsdRztBqboUqc0KxVe32qlqn0A023Z3ujkEC+P9IpQ8=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] x86/shadow: drop (replace) bogus assertions
Message-Id: <E1onzG1-0005P9-K3@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 09:33:01 +0000

commit a92dc2bb30ba65ae25d2f417677eb7ef9a6a0fef
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Mon Oct 24 15:46:11 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 24 15:46:11 2022 +0200

    x86/shadow: drop (replace) bogus assertions
    
    The addition of a call to shadow_blow_tables() from shadow_teardown()
    has resulted in the "no vcpus" related assertion becoming triggerable:
    If domain_create() fails with at least one page successfully allocated
    in the course of shadow_enable(), or if domain_create() succeeds and
    the domain is then killed without ever invoking XEN_DOMCTL_max_vcpus.
    Note that in-tree tests (test-resource and test-tsx) do exactly the
    latter of these two.
    
    The assertion's comment was bogus anyway: Shadow mode has been getting
    enabled before allocation of vCPU-s for quite some time. Convert the
    assertion to a conditional: As long as there are no vCPU-s, there's
    nothing to blow away.
    
    Fixes: e7aa55c0aab3 ("x86/p2m: free the paging memory pool preemptively")
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    
    A similar assertion/comment pair exists in _shadow_prealloc(); the
    comment is similarly bogus, and the assertion could in principle trigger
    e.g. when shadow_alloc_p2m_page() is called early enough. Replace those
    at the same time by a similar early return, here indicating failure to
    the caller (which will generally lead to the domain being crashed in
    shadow_prealloc()).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/arch/x86/mm/shadow/common.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index d985d51614..badfd53c6b 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -943,8 +943,9 @@ static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
         /* No reclaim when the domain is dying, teardown will take care of it. */
         return false;
 
-    /* Shouldn't have enabled shadows if we've no vcpus. */
-    ASSERT(d->vcpu && d->vcpu[0]);
+    /* Nothing to reclaim when there are no vcpus yet. */
+    if ( !d->vcpu[0] )
+        return false;
 
     /* Stage one: walk the list of pinned pages, unpinning them */
     perfc_incr(shadow_prealloc_1);
@@ -1034,8 +1035,9 @@ void shadow_blow_tables(struct domain *d)
     mfn_t smfn;
     int i;
 
-    /* Shouldn't have enabled shadows if we've no vcpus. */
-    ASSERT(d->vcpu && d->vcpu[0]);
+    /* Nothing to do when there are no vcpus yet. */
+    if ( !d->vcpu[0] )
+        return;
 
     /* Pass one: unpin all pinned pages */
     foreach_pinned_shadow(d, sp, t)
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 09:33:13 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 09:33:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.430980.683362 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onzGD-0003tk-Is; Thu, 27 Oct 2022 09:33:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 430980.683362; Thu, 27 Oct 2022 09:33:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onzGD-0003te-GB; Thu, 27 Oct 2022 09:33:13 +0000
Received: by outflank-mailman (input) for mailman id 430980;
 Thu, 27 Oct 2022 09:33:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onzGB-0003tF-Oc
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 09:33:11 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onzGB-0007qS-Ny
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 09:33:11 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onzGB-0005Ph-NE
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 09:33:11 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=leyoagHUa8HFmHw0Xyg9F2r1sEHW9PJDPli4x8w917Y=; b=U7rA+AVkt5XqaprzT686cuDuX2
	9V71pxoBAVTuIO/xn79D3UFaJupUQ29mQxEkDxaQf2HRdfDJog8sP4+TxbFCfl/M3eFkYxaAjESnt
	pa11+NB4RzrYAFCn5fNqqnUQwaeWEXOWBiSLCZWf2O/68r9NWC7k+aS50dR4GkY2JKxQ=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] libs/light: Fix build, fix missing _libxl_types_json.h
Message-Id: <E1onzGB-0005Ph-NE@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 09:33:11 +0000

commit 4ff0811a2b0d1c715f54550f9a3632195bb6b21f
Author:     Anthony PERARD <anthony.perard@citrix.com>
AuthorDate: Tue Oct 25 12:16:32 2022 +0100
Commit:     Andrew Cooper <andrew.cooper3@citrix.com>
CommitDate: Tue Oct 25 13:36:40 2022 +0100

    libs/light: Fix build, fix missing _libxl_types_json.h
    
    Make may not have copied "_libxl_types_json.h" into $(XEN_INCLUDE)
    before starting to build the different objects.
    
    Make sure that the generated headers are copied into $(XEN_INCLUDE)
    before using them. This is achieved by telling make which headers are
    needed in order to use "libxl_internal.h", which uses "libxl_json.h",
    which uses "_libxl_types_json.h". "libxl_internal.h" also uses
    "libxl.h", so add it to the list.
    
    This also prevents `gcc` from picking up potentially installed headers
    from a previous version of Xen.
    
    Reported-by: Per Bilse <per.bilse@citrix.com>
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 tools/libs/light/Makefile | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tools/libs/light/Makefile b/tools/libs/light/Makefile
index d681269229..374be1cfab 100644
--- a/tools/libs/light/Makefile
+++ b/tools/libs/light/Makefile
@@ -209,6 +209,7 @@ _libxl_save_msgs_helper.h _libxl_save_msgs_callout.h: \
 
 $(XEN_INCLUDE)/libxl.h: $(XEN_INCLUDE)/_libxl_types.h
 $(XEN_INCLUDE)/libxl_json.h: $(XEN_INCLUDE)/_libxl_types_json.h
+libxl_internal.h: $(XEN_INCLUDE)/libxl.h $(XEN_INCLUDE)/libxl_json.h
 libxl_internal.h: _libxl_types_internal.h _libxl_types_private.h _libxl_types_internal_private.h
 libxl_internal_json.h: _libxl_types_internal_json.h
 
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 09:33:22 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 09:33:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.430981.683366 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onzGM-0003ww-KA; Thu, 27 Oct 2022 09:33:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 430981.683366; Thu, 27 Oct 2022 09:33:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onzGM-0003wp-Hb; Thu, 27 Oct 2022 09:33:22 +0000
Received: by outflank-mailman (input) for mailman id 430981;
 Thu, 27 Oct 2022 09:33:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onzGL-0003wT-SF
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 09:33:21 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onzGL-0007qv-RV
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 09:33:21 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onzGL-0005Q6-Q3
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 09:33:21 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=EKL4QxH8fgqUD9he31KjK4t58gC/nytcObFsQQOVHfQ=; b=c2FTWP84i3PrUrufREXktoQJ8O
	dLVHv2niKEh4H3D9Rg2muAgmlnfy7cGWGPG7rT+S/RcIYh3ls+/f4RM/aY88YlIXqlpKDl0IhwIPA
	6VdM/iJYkJXHVoiM1/12Nersk3nNsUxEiTnnAgpAOKbhwywsOHq5hsNtTkNtKhszltTw=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] automation: Explicitly enable NULL scheduler for boot-cpupools test
Message-Id: <E1onzGL-0005Q6-Q3@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 09:33:21 +0000

commit aef07fd1868455e572b46b3e88e2679414b07214
Author:     Michal Orzel <michal.orzel@amd.com>
AuthorDate: Mon Oct 24 14:04:43 2022 +0200
Commit:     Stefano Stabellini <stefano.stabellini@amd.com>
CommitDate: Tue Oct 25 15:40:46 2022 -0700

    automation: Explicitly enable NULL scheduler for boot-cpupools test
    
    The NULL scheduler is not enabled by default in non-debug Xen builds.
    This causes the boot-time cpupools test to fail in such build jobs. Fix
    the issue by explicitly specifying the config options required to
    enable the NULL scheduler.
    
    Fixes: 36e3f4158778 ("automation: Add a new job for testing boot time cpupools on arm64")
    Signed-off-by: Michal Orzel <michal.orzel@amd.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 automation/gitlab-ci/build.yaml | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/automation/gitlab-ci/build.yaml b/automation/gitlab-ci/build.yaml
index ddc2234faf..716ee0b1e4 100644
--- a/automation/gitlab-ci/build.yaml
+++ b/automation/gitlab-ci/build.yaml
@@ -582,6 +582,9 @@ alpine-3.12-gcc-arm64-boot-cpupools:
   variables:
     CONTAINER: alpine:3.12-arm64v8
     EXTRA_XEN_CONFIG: |
+      CONFIG_EXPERT=y
+      CONFIG_UNSUPPORTED=y
+      CONFIG_SCHED_NULL=y
       CONFIG_BOOT_TIME_CPUPOOLS=y
 
 ## Test artifacts common
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 09:33:32 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 09:33:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.430982.683371 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onzGW-00040b-Li; Thu, 27 Oct 2022 09:33:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 430982.683371; Thu, 27 Oct 2022 09:33:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onzGW-00040T-J3; Thu, 27 Oct 2022 09:33:32 +0000
Received: by outflank-mailman (input) for mailman id 430982;
 Thu, 27 Oct 2022 09:33:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onzGV-00040G-VG
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 09:33:31 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onzGV-0007r6-Ub
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 09:33:31 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onzGV-0005QV-Ti
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 09:33:31 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=WJh2bms6agZDaWESSqGopxZ0k89aYUpjsULrDwjXNtM=; b=nZjF6iuEyujpVL/W0bc38Gvhz9
	9IwKKR/voVvLgXtfnC+yE2kcQ96Y/RS3lbfJZy4Ty25uU/NX0kVqUkOnamzET8VrK42iNF0GC4beJ
	VI6AoOjOM42+QA00zMtp9mKbCE7rN3fhNpQMnJtrUbu3iGiZFMFfC8mtu9jt6iRehvyA=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] automation: Build Xen according to the type of the job
Message-Id: <E1onzGV-0005QV-Ti@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 09:33:31 +0000

commit ef9cc669ba157f9e71fd79722ee43892e7304604
Author:     Michal Orzel <michal.orzel@amd.com>
AuthorDate: Fri Oct 21 15:22:38 2022 +0200
Commit:     Stefano Stabellini <stefano.stabellini@amd.com>
CommitDate: Tue Oct 25 15:41:30 2022 -0700

    automation: Build Xen according to the type of the job
    
    All the build jobs exist in two flavors, debug and non-debug, where the
    former sets the 'debug' variable to 'y' and the latter to 'n'. This
    variable is only recognized by the toolstack, because Xen requires
    enabling/disabling a debug build via e.g. menuconfig or a config file.
    As a corollary, we end up building/testing Xen with CONFIG_DEBUG always
    set to its default value ('y' for unstable and 'n' for stable branches),
    regardless of the type of the build job.
    
    Fix this behavior by setting CONFIG_DEBUG according to the 'debug' value.
    
    Signed-off-by: Michal Orzel <michal.orzel@amd.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 automation/scripts/build | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/automation/scripts/build b/automation/scripts/build
index 8c0882f3aa..a593419063 100755
--- a/automation/scripts/build
+++ b/automation/scripts/build
@@ -21,12 +21,13 @@ if [[ "${RANDCONFIG}" == "y" ]]; then
     make -j$(nproc) -C xen KCONFIG_ALLCONFIG=tools/kconfig/allrandom.config randconfig
     hypervisor_only="y"
 else
+    echo "CONFIG_DEBUG=${debug}" > xen/.config
+
     if [[ -n "${EXTRA_XEN_CONFIG}" ]]; then
-        echo "${EXTRA_XEN_CONFIG}" > xen/.config
-        make -j$(nproc) -C xen olddefconfig
-    else
-        make -j$(nproc) -C xen defconfig
+        echo "${EXTRA_XEN_CONFIG}" >> xen/.config
     fi
+
+    make -j$(nproc) -C xen olddefconfig
 fi
 
 # Save the config file before building because build failure causes the script
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 09:55:09 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 09:55:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431008.683431 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onzbN-0000y4-QP; Thu, 27 Oct 2022 09:55:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431008.683431; Thu, 27 Oct 2022 09:55:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onzbN-0000xb-N3; Thu, 27 Oct 2022 09:55:05 +0000
Received: by outflank-mailman (input) for mailman id 431008;
 Thu, 27 Oct 2022 09:55:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onzbN-0000pG-6X
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 09:55:05 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onzbN-0008D7-3D
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 09:55:05 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onzbN-0006wP-27
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 09:55:05 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=EpZj2IHUUV9SyHklIfOmuGB9dzRyPy+z3dK9I8LTfqo=; b=5vayoK16zLgwx3Xe9LXVZdmsnY
	02CTmv5tFglCI+DN2o6jlE5/0vRomM+OSqyQ3mS8mFHlBrw6JUh5ZpLHjoxrS0/W30jGmqMJqMB6d
	d0+pr8Jx8RyfOITcXwIWsW/gdEXdZArb0Gb0PKdWHYejAfKD4HcgxYuJba45rY2Lg85k=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] vpci: include xen/vmap.h to fix build on ARM
Message-Id: <E1onzbN-0006wP-27@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 09:55:05 +0000

commit 2ca833688abd4ce88f8eba06ee98c08d35d2d486
Author:     Volodymyr Babchuk <volodymyr_babchuk@epam.com>
AuthorDate: Thu Oct 27 11:48:36 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Thu Oct 27 11:48:36 2022 +0200

    vpci: include xen/vmap.h to fix build on ARM
    
    Patch b4f211606011 ("vpci/msix: fix PBA accesses") introduced a call to
    iounmap(), but did not add the corresponding include.
    
    Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/drivers/vpci/vpci.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
index 98198dc2c9..6d48d496bb 100644
--- a/xen/drivers/vpci/vpci.c
+++ b/xen/drivers/vpci/vpci.c
@@ -19,6 +19,7 @@
 
 #include <xen/sched.h>
 #include <xen/vpci.h>
+#include <xen/vmap.h>
 
 /* Internal struct to store the emulated PCI registers. */
 struct vpci_register {
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 09:55:15 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 09:55:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431009.683435 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onzbX-00013s-Rl; Thu, 27 Oct 2022 09:55:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431009.683435; Thu, 27 Oct 2022 09:55:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onzbX-00013l-Oe; Thu, 27 Oct 2022 09:55:15 +0000
Received: by outflank-mailman (input) for mailman id 431009;
 Thu, 27 Oct 2022 09:55:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onzbX-00013f-73
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 09:55:15 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onzbX-0008DZ-6J
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 09:55:15 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onzbX-0006x1-5P
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 09:55:15 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=1kDKGPGIvU+lZ2OESSN5FXPwROegQOEQp+/S7lwRLBg=; b=iO3gj7XCQi8wM8m/QYy4j8WoMH
	D6U51Qp+urNQsI+GfzmjVcvzSZlp52uVOGz2m9AOgm8jlNOsy2uGONCgduGk+9DyL0AK093MgxATb
	LL0+dXsbdQtFtP74zUeb2p8zORY1c4A0NdNMj8wR+jFcCFnoWOSN82Q3Zc1aI5CLe1p8=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] x86: also zap secondary time area handles during soft reset
Message-Id: <E1onzbX-0006x1-5P@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 09:55:15 +0000

commit b80d4f8d2ea6418e32fb4f20d1304ace6d6566e3
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Thu Oct 27 11:49:09 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Thu Oct 27 11:49:09 2022 +0200

    x86: also zap secondary time area handles during soft reset
    
    Just like domain_soft_reset() properly zaps runstate area handles, the
    secondary time area ones also need discarding to prevent guest memory
    corruption once the guest is re-started.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/arch/x86/domain.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index a5d2d66852..ce82c502bb 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -951,6 +951,7 @@ int arch_domain_soft_reset(struct domain *d)
     struct page_info *page = virt_to_page(d->shared_info), *new_page;
     int ret = 0;
     struct domain *owner;
+    struct vcpu *v;
     mfn_t mfn;
     gfn_t gfn;
     p2m_type_t p2mt;
@@ -1030,7 +1031,12 @@ int arch_domain_soft_reset(struct domain *d)
                "Failed to add a page to replace %pd's shared_info frame %"PRI_gfn"\n",
                d, gfn_x(gfn));
         free_domheap_page(new_page);
+        goto exit_put_gfn;
     }
+
+    for_each_vcpu ( d, v )
+        set_xen_guest_handle(v->arch.time_info_guest, NULL);
+
  exit_put_gfn:
     put_gfn(d, gfn_x(gfn));
  exit_put_page:
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 09:55:25 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 09:55:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431010.683438 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onzbh-00016v-UA; Thu, 27 Oct 2022 09:55:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431010.683438; Thu, 27 Oct 2022 09:55:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1onzbh-00016n-Rd; Thu, 27 Oct 2022 09:55:25 +0000
Received: by outflank-mailman (input) for mailman id 431010;
 Thu, 27 Oct 2022 09:55:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onzbh-00016g-A3
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 09:55:25 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onzbh-0008Dj-9I
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 09:55:25 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1onzbh-0006xd-8R
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 09:55:25 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=gPd2dybpv7XQSUZm1XXzQITc4W45ZHXHN/5D2oYtnfA=; b=mKsmID62TvGU0Czkxzx1wbBzGi
	Qxcqimoe6AJoav4Wiw9xmsAz6TUG18+op7PGyQThAXq2vun/tHXYqxwDJKcpJIIMbJx4EkNPWOuOn
	XeqiiUbWWqlQhSeL627wAd1yVecmvqDUjHZ/zZosUa4kxtjWflCeb7BHBfyGA8wTP2N0=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] Arm32: prune (again) ld warning about mismatched wchar_t sizes
Message-Id: <E1onzbh-0006xd-8R@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 09:55:25 +0000

commit 20cf0ab774e828dc4e75ecebecf56b53aca754fc
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Thu Oct 27 11:50:47 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Thu Oct 27 11:50:47 2022 +0200

    Arm32: prune (again) ld warning about mismatched wchar_t sizes
    
    The name change (stub.c -> common-stub.c) rendered the earlier
    workaround (commit a4d4c541f58b ["xen/arm32: avoid EFI stub wchar_t size
    linker warning"]) ineffectual.
    
    Fixes: bfd3e9945d1b ("build: fix x86 out-of-tree build without EFI")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/arch/arm/efi/Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/arm/efi/Makefile b/xen/arch/arm/efi/Makefile
index 2459cbae3a..74b7274bdd 100644
--- a/xen/arch/arm/efi/Makefile
+++ b/xen/arch/arm/efi/Makefile
@@ -6,6 +6,6 @@ obj-$(CONFIG_ACPI) +=  efi-dom0.init.o
 else
 obj-y += common-stub.o
 
-$(obj)/stub.o: CFLAGS-y += -fno-short-wchar
+$(obj)/common-stub.o: CFLAGS-y += -fno-short-wchar
 
 endif
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 18:22:08 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 18:22:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431156.683772 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo7Vz-0005xe-Bv; Thu, 27 Oct 2022 18:22:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431156.683772; Thu, 27 Oct 2022 18:22:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo7Vz-0005xW-8h; Thu, 27 Oct 2022 18:22:03 +0000
Received: by outflank-mailman (input) for mailman id 431156;
 Thu, 27 Oct 2022 18:22:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo7Vy-0005xQ-Ek
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 18:22:02 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo7Vy-0000gO-Ds
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 18:22:02 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo7Vy-0003eO-Cv
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 18:22:02 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=rwSJFhzy3tXGbE57U7890YRt1fVimUQ2Yq4A53dbrOk=; b=dfePhjYM0SFiAwLlCmCbRh4XM/
	imiRgpgDIvPmquIWVO41ieY7cpnP8ibUJI5RofYWesqDdPzgcRMv3935E/RyE6Iw2uSb03hAmsQTj
	l2Z7Cgo7QofPM/SFX787ItucRNqa6PRnq8tJIWdciujqg/YFGbX/Zonc9NUoBu2zlQd0=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.14] xen/arm: p2m: Prevent adding mapping when domain is dying
Message-Id: <E1oo7Vy-0003eO-Cv@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 18:22:02 +0000

commit 7a7406ba1d8912719eb7c9eec2d7cd34f49dfac0
Author:     Julien Grall <jgrall@amazon.com>
AuthorDate: Tue Oct 11 15:32:58 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:32:58 2022 +0200

    xen/arm: p2m: Prevent adding mapping when domain is dying
    
    During the domain destroy process, the domain will still be accessible
    until it is fully destroyed. So will the P2M, because we don't bail
    out early if is_dying is non-zero. If a domain has permission to
    modify the other domain's P2M (i.e. dom0, or a stubdomain), then
    foreign mappings can be added past relinquish_p2m_mapping().

    Therefore, we need to prevent mappings from being added while the
    domain is dying. This commit does so by adding a d->is_dying check
    to p2m_set_entry(). It also enhances the check in
    relinquish_p2m_mapping() to make sure that no mappings can be added
    to the P2M after the P2M lock is released.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Tested-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    master commit: 3ebe773293e3b945460a3d6f54f3b91915397bab
    master date: 2022-10-11 14:20:18 +0200
---
 xen/arch/arm/p2m.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 2290b7114f..35943589fc 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1085,6 +1085,15 @@ int p2m_set_entry(struct p2m_domain *p2m,
 {
     int rc = 0;
 
+    /*
+     * Any reference taken by the P2M mappings (e.g. foreign mapping) will
+     * be dropped in relinquish_p2m_mapping(). As the P2M will still
+     * be accessible after, we need to prevent mapping to be added when the
+     * domain is dying.
+     */
+    if ( unlikely(p2m->domain->is_dying) )
+        return -ENOMEM;
+
     while ( nr )
     {
         unsigned long mask;
@@ -1579,6 +1588,8 @@ int relinquish_p2m_mapping(struct domain *d)
     unsigned int order;
     gfn_t start, end;
 
+    BUG_ON(!d->is_dying);
+    /* No mappings can be added in the P2M after the P2M lock is released. */
     p2m_write_lock(p2m);
 
     start = p2m->lowest_mapped_gfn;
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.14


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 18:22:13 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 18:22:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431157.683776 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo7W9-0005ze-D9; Thu, 27 Oct 2022 18:22:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431157.683776; Thu, 27 Oct 2022 18:22:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo7W9-0005zW-AL; Thu, 27 Oct 2022 18:22:13 +0000
Received: by outflank-mailman (input) for mailman id 431157;
 Thu, 27 Oct 2022 18:22:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo7W8-0005zI-J2
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 18:22:12 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo7W8-0000gS-IG
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 18:22:12 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo7W8-0003en-GM
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 18:22:12 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=xA7s9Rs2gxB87kKNfCNpYlmxI3JIXDHMBXAPuzSs5MU=; b=6Dh8bGEodQNMsDaw6BEwL2D7Nv
	epS2gZoZQaYle51bLK0V+Ne4LMvNy9miNwXWSB8u6h5IYpgi5XRiEwj0NfjxWmERtfWAo5NP3ok1Y
	bVhC+PHz37g/i+HJQrfQiHIgYDEJXOdGOSIInJjSNSpJUCYLa4zUXZDcFhz2k0sxpfWs=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.14] xen/arm: p2m: Handle preemption when freeing intermediate page tables
Message-Id: <E1oo7W8-0003en-GM@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 18:22:12 +0000

commit 9c975e636ed2782d4fd8b2b76126bdfb81f386cc
Author:     Julien Grall <jgrall@amazon.com>
AuthorDate: Tue Oct 11 15:34:25 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:34:25 2022 +0200

    xen/arm: p2m: Handle preemption when freeing intermediate page tables
    
    At the moment the P2M page tables will be freed when the domain structure
    is freed, without any preemption. As the P2M is quite large, iterating
    through it may take more time than is reasonable without intermediate
    preemption (to run softirqs and perhaps the scheduler).

    Split p2m_teardown() in two parts: one preemptible and called when
    relinquishing the resources, the other non-preemptible and called
    when freeing the domain structure.

    As we are now freeing the P2M pages early, we also need to prevent
    further allocation if someone calls p2m_set_entry() past p2m_teardown()
    (I wasn't able to prove this will never happen). This is done by the
    domain->is_dying check, added by the previous patch, in p2m_set_entry().

    Similarly, we want to make sure that no-one can access the freed
    pages. Therefore the root is cleared before freeing the pages.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Tested-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    master commit: 3202084566bba0ef0c45caf8c24302f83d92f9c8
    master date: 2022-10-11 14:20:56 +0200
---
 xen/arch/arm/domain.c     | 10 ++++++++--
 xen/arch/arm/p2m.c        | 47 ++++++++++++++++++++++++++++++++++++++++++++---
 xen/include/asm-arm/p2m.h | 13 +++++++++++--
 3 files changed, 63 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 849fef2f1e..caa625bd16 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -774,10 +774,10 @@ fail:
 void arch_domain_destroy(struct domain *d)
 {
     /* IOMMU page table is shared with P2M, always call
-     * iommu_domain_destroy() before p2m_teardown().
+     * iommu_domain_destroy() before p2m_final_teardown().
      */
     iommu_domain_destroy(d);
-    p2m_teardown(d);
+    p2m_final_teardown(d);
     domain_vgic_free(d);
     domain_vuart_free(d);
     free_xenheap_page(d->shared_info);
@@ -979,6 +979,7 @@ enum {
     PROG_xen,
     PROG_page,
     PROG_mapping,
+    PROG_p2m,
     PROG_done,
 };
 
@@ -1029,6 +1030,11 @@ int domain_relinquish_resources(struct domain *d)
         if ( ret )
             return ret;
 
+    PROGRESS(p2m):
+        ret = p2m_teardown(d);
+        if ( ret )
+            return ret;
+
     PROGRESS(done):
         break;
 
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 35943589fc..62f4d31dc1 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1496,17 +1496,58 @@ static void p2m_free_vmid(struct domain *d)
     spin_unlock(&vmid_alloc_lock);
 }
 
-void p2m_teardown(struct domain *d)
+int p2m_teardown(struct domain *d)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    unsigned long count = 0;
     struct page_info *pg;
+    unsigned int i;
+    int rc = 0;
+
+    p2m_write_lock(p2m);
+
+    /*
+     * We are about to free the intermediate page-tables, so clear the
+     * root to prevent any walk to use them.
+     */
+    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
+        clear_and_clean_page(p2m->root + i);
+
+    /*
+     * The domain will not be scheduled anymore, so in theory we should
+     * not need to flush the TLBs. Do it for safety purpose.
+     *
+     * Note that all the devices have already been de-assigned. So we don't
+     * need to flush the IOMMU TLB here.
+     */
+    p2m_force_tlb_flush_sync(p2m);
+
+    while ( (pg = page_list_remove_head(&p2m->pages)) )
+    {
+        free_domheap_page(pg);
+        count++;
+        /* Arbitrarily preempt every 512 iterations */
+        if ( !(count % 512) && hypercall_preempt_check() )
+        {
+            rc = -ERESTART;
+            break;
+        }
+    }
+
+    p2m_write_unlock(p2m);
+
+    return rc;
+}
+
+void p2m_final_teardown(struct domain *d)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
     /* p2m not actually initialized */
     if ( !p2m->domain )
         return;
 
-    while ( (pg = page_list_remove_head(&p2m->pages)) )
-        free_domheap_page(pg);
+    ASSERT(page_list_empty(&p2m->pages));
 
     if ( p2m->root )
         free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index ea8a03449d..f40f82794d 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -183,8 +183,17 @@ void setup_virt_paging(void);
 /* Init the datastructures for later use by the p2m code */
 int p2m_init(struct domain *d);
 
-/* Return all the p2m resources to Xen. */
-void p2m_teardown(struct domain *d);
+/*
+ * The P2M resources are freed in two parts:
+ *  - p2m_teardown() will be called when relinquish the resources. It
+ *    will free large resources (e.g. intermediate page-tables) that
+ *    requires preemption.
+ *  - p2m_final_teardown() will be called when domain struct is been
+ *    freed. This *cannot* be preempted and therefore one small
+ *    resources should be freed here.
+ */
+int p2m_teardown(struct domain *d);
+void p2m_final_teardown(struct domain *d);
 
 /*
  * Remove mapping refcount on each mapping page in the p2m
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.14


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 18:22:24 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 18:22:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431158.683781 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo7WK-00062h-Eu; Thu, 27 Oct 2022 18:22:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431158.683781; Thu, 27 Oct 2022 18:22:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo7WK-00062Z-Bw; Thu, 27 Oct 2022 18:22:24 +0000
Received: by outflank-mailman (input) for mailman id 431158;
 Thu, 27 Oct 2022 18:22:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo7WI-00062L-MR
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 18:22:22 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo7WI-0000h8-Li
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 18:22:22 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo7WI-0003fI-Kp
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 18:22:22 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=6sqYEKT6S/qBbSXbvw4LOpdY3k9xC+3GXFWPR/3eozY=; b=dOeg8E+5bPcZq0HP+i/eyxTs+L
	MaA4CHUXV7qWPCvkF3YOVauLpUxsZAHksAOUJAiBZx9xw1l1H9BStejT6jEdBUhRSjxWMHWVogWwd
	OqXkRhgcjYSjSDcNaoijwopW2XVCYkqNldOdkt6aHSj/vzdUMIfsPI0JSCsWMcY4mCwI=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.14] x86/p2m: add option to skip root pagetable removal in p2m_teardown()
Message-Id: <E1oo7WI-0003fI-Kp@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 18:22:22 +0000

commit 54b6eab0e4450a39ebe11b8f2faeaeb09c6e774a
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 15:34:41 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:34:41 2022 +0200

    x86/p2m: add option to skip root pagetable removal in p2m_teardown()
    
    Add a new parameter to p2m_teardown() in order to select whether the
    root page table should also be freed.  Note that all users are
    adjusted to pass the parameter to remove the root page tables, so
    behavior is not modified.
    
    No functional change intended.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Suggested-by: Julien Grall <julien@xen.org>
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: 1df52a270225527ae27bfa2fc40347bf93b78357
    master date: 2022-10-11 14:21:23 +0200
---
 xen/arch/x86/mm/hap/hap.c       |  6 +++---
 xen/arch/x86/mm/p2m.c           | 20 ++++++++++++++++----
 xen/arch/x86/mm/shadow/common.c |  4 ++--
 xen/include/asm-x86/p2m.h       |  2 +-
 4 files changed, 22 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 1349de01d4..395fd32559 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -540,18 +540,18 @@ void hap_final_teardown(struct domain *d)
         }
 
         for ( i = 0; i < MAX_ALTP2M; i++ )
-            p2m_teardown(d->arch.altp2m_p2m[i]);
+            p2m_teardown(d->arch.altp2m_p2m[i], true);
     }
 
     /* Destroy nestedp2m's first */
     for (i = 0; i < MAX_NESTEDP2M; i++) {
-        p2m_teardown(d->arch.nested_p2m[i]);
+        p2m_teardown(d->arch.nested_p2m[i], true);
     }
 
     if ( d->arch.paging.hap.total_pages != 0 )
         hap_teardown(d, NULL);
 
-    p2m_teardown(p2m_get_hostp2m(d));
+    p2m_teardown(p2m_get_hostp2m(d), true);
     /* Free any memory that the p2m teardown released */
     paging_lock(d);
     hap_set_allocation(d, 0, NULL);
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index be5e9c031a..7ec6466922 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -737,11 +737,11 @@ int p2m_alloc_table(struct p2m_domain *p2m)
  * hvm fixme: when adding support for pvh non-hardware domains, this path must
  * cleanup any foreign p2m types (release refcnts on them).
  */
-void p2m_teardown(struct p2m_domain *p2m)
+void p2m_teardown(struct p2m_domain *p2m, bool remove_root)
 /* Return all the p2m pages to Xen.
  * We know we don't have any extra mappings to these pages */
 {
-    struct page_info *pg;
+    struct page_info *pg, *root_pg = NULL;
     struct domain *d;
 
     if (p2m == NULL)
@@ -751,10 +751,22 @@ void p2m_teardown(struct p2m_domain *p2m)
 
     p2m_lock(p2m);
     ASSERT(atomic_read(&d->shr_pages) == 0);
-    p2m->phys_table = pagetable_null();
+
+    if ( remove_root )
+        p2m->phys_table = pagetable_null();
+    else if ( !pagetable_is_null(p2m->phys_table) )
+    {
+        root_pg = pagetable_get_page(p2m->phys_table);
+        clear_domain_page(pagetable_get_mfn(p2m->phys_table));
+    }
 
     while ( (pg = page_list_remove_head(&p2m->pages)) )
-        d->arch.paging.free_page(d, pg);
+        if ( pg != root_pg )
+            d->arch.paging.free_page(d, pg);
+
+    if ( root_pg )
+        page_list_add(root_pg, &p2m->pages);
+
     p2m_unlock(p2m);
 }
 
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 773777321f..4436ea2c51 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2686,7 +2686,7 @@ int shadow_enable(struct domain *d, u32 mode)
     paging_unlock(d);
  out_unlocked:
     if ( rv != 0 && !pagetable_is_null(p2m_get_pagetable(p2m)) )
-        p2m_teardown(p2m);
+        p2m_teardown(p2m, true);
     if ( rv != 0 && pg != NULL )
     {
         pg->count_info &= ~PGC_count_mask;
@@ -2839,7 +2839,7 @@ void shadow_final_teardown(struct domain *d)
         shadow_teardown(d, NULL);
 
     /* It is now safe to pull down the p2m map. */
-    p2m_teardown(p2m_get_hostp2m(d));
+    p2m_teardown(p2m_get_hostp2m(d), true);
     /* Free any shadow memory that the p2m teardown released */
     paging_lock(d);
     shadow_set_allocation(d, 0, NULL);
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 9be4a9c58e..cfe2e55fcf 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -595,7 +595,7 @@ int p2m_init(struct domain *d);
 int p2m_alloc_table(struct p2m_domain *p2m);
 
 /* Return all the p2m resources to Xen. */
-void p2m_teardown(struct p2m_domain *p2m);
+void p2m_teardown(struct p2m_domain *p2m, bool remove_root);
 void p2m_final_teardown(struct domain *d);
 
 /* Add a page to a domain's p2m table */
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.14
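The freeing logic this patch introduces - keep the root page allocated (and re-queued on the page list) while every other p2m page is returned - can be sketched with a toy page list. All names below (`struct page`, `list_remove_head()`, `teardown()`) are hypothetical stand-ins for illustration, not the real Xen types:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

/* Toy page pool illustrating the p2m_teardown(..., remove_root) idea:
 * pages hang off a singly linked list; the "root" page may survive
 * the teardown and be pushed back onto the list. */
struct page {
    struct page *next;
    bool is_root;
};

/* Pop the list head, or return NULL when the list is empty. */
static struct page *list_remove_head(struct page **head)
{
    struct page *pg = *head;

    if (pg)
        *head = pg->next;
    return pg;
}

/* Return the number of pages freed.  When remove_root is false the
 * root page is kept allocated and re-queued, mirroring the
 * page_list_add(root_pg, &p2m->pages) step in the patch. */
static int teardown(struct page **head, bool remove_root)
{
    struct page *pg, *root_pg = NULL;
    int freed = 0;

    while ((pg = list_remove_head(head)) != NULL) {
        if (!remove_root && pg->is_root) {
            root_pg = pg;       /* keep the root allocated */
            continue;
        }
        free(pg);
        freed++;
    }

    if (root_pg) {
        root_pg->next = *head;  /* re-queue for a later full teardown */
        *head = root_pg;
    }
    return freed;
}
```

In this backport every caller still passes `true`, so behaviour is unchanged; the split only becomes visible in the later XSA-410 patches that defer freeing the root page table.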


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 18:22:34 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 18:22:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431159.683785 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo7WU-00065v-Gj; Thu, 27 Oct 2022 18:22:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431159.683785; Thu, 27 Oct 2022 18:22:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo7WU-00065l-DZ; Thu, 27 Oct 2022 18:22:34 +0000
Received: by outflank-mailman (input) for mailman id 431159;
 Thu, 27 Oct 2022 18:22:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo7WS-00065F-Qe
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 18:22:32 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo7WS-0000hO-OP
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 18:22:32 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo7WS-0003fj-No
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 18:22:32 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=Bp+6ch6cYT7X9yPE6uI28DZbAriS+FCXNCeqFbUqDNc=; b=KId0/OYB6l2cKvdywcUvzpIGjn
	OO5cVKlrMUmt15he6UzboCPe69PWxhoveTBDYvl7auuRRvB7dJpEuR/abS5nDy0JKx5HQMHuRRPkm
	Qn8Jp7cXXfY9HUZtuCP2sm72bDOx8DBH9VaZ1xa+XekbhoDcHT9pTn0NGSnRdWfshcog=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.14] x86/HAP: adjust monitor table related error handling
Message-Id: <E1oo7WS-0003fj-No@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 18:22:32 +0000

commit 3163e34f6abad70160711ef60c21645355f509fb
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 15:34:59 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:34:59 2022 +0200

    x86/HAP: adjust monitor table related error handling
    
    hap_make_monitor_table() will return INVALID_MFN if it encounters an
    error condition, but hap_update_paging_modes() wasn't handling this
    value, resulting in an inappropriate value being stored in
    monitor_table. This would subsequently misguide at least
    hap_vcpu_teardown(). Avoid this by bailing early.
    
    Further, when a domain has already been crashed or (perhaps less
    important as there's no such path known to lead here) is already dying,
    avoid calling domain_crash() on it again - that's at best confusing.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    master commit: 5b44a61180f4f2e4f490a28400c884dd357ff45d
    master date: 2022-10-11 14:21:56 +0200
---
 xen/arch/x86/mm/hap/hap.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 395fd32559..3d626fe149 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -39,6 +39,7 @@
 #include <asm/domain.h>
 #include <xen/numa.h>
 #include <asm/hvm/nestedhvm.h>
+#include <public/sched.h>
 
 #include "private.h"
 
@@ -404,8 +405,13 @@ static mfn_t hap_make_monitor_table(struct vcpu *v)
     return m4mfn;
 
  oom:
-    printk(XENLOG_G_ERR "out of memory building monitor pagetable\n");
-    domain_crash(d);
+    if ( !d->is_dying &&
+         (!d->is_shutting_down || d->shutdown_code != SHUTDOWN_crash) )
+    {
+        printk(XENLOG_G_ERR "%pd: out of memory building monitor pagetable\n",
+               d);
+        domain_crash(d);
+    }
     return INVALID_MFN;
 }
 
@@ -758,6 +764,9 @@ static void hap_update_paging_modes(struct vcpu *v)
     if ( pagetable_is_null(v->arch.hvm.monitor_table) )
     {
         mfn_t mmfn = hap_make_monitor_table(v);
+
+        if ( mfn_eq(mmfn, INVALID_MFN) )
+            goto unlock;
         v->arch.hvm.monitor_table = pagetable_from_mfn(mmfn);
         make_cr3(v, mmfn);
         hvm_update_host_cr3(v);
@@ -766,6 +775,7 @@ static void hap_update_paging_modes(struct vcpu *v)
     /* CR3 is effectively updated by a mode change. Flush ASIDs, etc. */
     hap_update_cr3(v, 0, false);
 
+ unlock:
     paging_unlock(d);
     put_gfn(d, cr3_gfn);
 }
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.14
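The two adjustments - returning INVALID_MFN without re-crashing a domain that is already going down, and bailing to the unlock path instead of storing the bogus value - reduce to the control-flow sketch below. The types and helpers are simplified stand-ins (the real check also considers `is_shutting_down` and `SHUTDOWN_crash`):

```c
#include <stdbool.h>

/* Hypothetical stand-ins for the Xen types; only the control flow of
 * the patch is reproduced. */
#define INVALID_MFN (~0UL)
typedef unsigned long mfn_t;

struct domain {
    bool is_dying;
    bool crashed;          /* records that domain_crash() ran */
    bool locked;           /* stands in for the paging lock */
    mfn_t monitor_table;
};

static void domain_crash(struct domain *d) { d->crashed = true; }

/* Returns INVALID_MFN on failure; crashes the domain only when it is
 * not already on its way down, as in the adjusted oom: path. */
static mfn_t make_monitor_table(struct domain *d, bool oom)
{
    if (!oom)
        return 42;          /* some valid MFN */
    if (!d->is_dying)
        domain_crash(d);
    return INVALID_MFN;
}

/* Mirrors hap_update_paging_modes(): on INVALID_MFN jump straight to
 * the unlock path instead of storing the bogus value. */
static void update_paging_modes(struct domain *d, bool oom)
{
    d->locked = true;

    mfn_t mmfn = make_monitor_table(d, oom);

    if (mmfn == INVALID_MFN)
        goto unlock;
    d->monitor_table = mmfn;
 unlock:
    d->locked = false;
}
```

The key property is that on the error path `monitor_table` is never written, so later teardown code never sees a half-initialised value.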


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 18:22:44 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 18:22:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431160.683787 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo7We-00068S-HZ; Thu, 27 Oct 2022 18:22:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431160.683787; Thu, 27 Oct 2022 18:22:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo7We-00068L-F6; Thu, 27 Oct 2022 18:22:44 +0000
Received: by outflank-mailman (input) for mailman id 431160;
 Thu, 27 Oct 2022 18:22:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo7Wc-000687-TH
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 18:22:42 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo7Wc-0000hY-Sa
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 18:22:42 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo7Wc-0003gA-Qx
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 18:22:42 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=rzRhvn6mO3JxRSaTr1FE8+i7+ubte8KBZr/saDNQ5qU=; b=JP8JGDiJpO1KnL8WFUwoYrkX9u
	iJrR3EfG7PNXyx4615kzQ8mBF3Oq4a4iiiQ1GM+/u3Jx1NdiKWnkyGn6tZMVUKg+PGHZkP03kMqxy
	m+Rr+aFnpqydLfmQxcjfCZm81nq7u+3fxBX2KxBUQivbAW1RgjzzCHWONVgIeepiFUB8=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.14] x86/shadow: tolerate failure of sh_set_toplevel_shadow()
Message-Id: <E1oo7Wc-0003gA-Qx@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 18:22:42 +0000

commit 0bab3abf73783da66af8cf7cf7aabb7d86caa035
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 15:35:43 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:35:43 2022 +0200

    x86/shadow: tolerate failure of sh_set_toplevel_shadow()
    
    Subsequently sh_set_toplevel_shadow() will be adjusted to install a
    blank entry in case prealloc fails. There are, in fact, pre-existing
    error paths which would put in place a blank entry. The 4- and 2-level
    code in sh_update_cr3(), however, assumes the top level entry to be
    valid.
    
    Hence bail from the function in the unlikely event that it's not. Note
    that 3-level logic works differently: In particular a guest is free to
    supply a PDPTR pointing at 4 non-present (or otherwise deemed invalid)
    entries. The guest will crash, but we already cope with that.
    
    Really mfn_valid() is likely wrong to use in sh_set_toplevel_shadow(),
    and it should instead be !mfn_eq(gmfn, INVALID_MFN). Avoid such a change
    in security context, but add a respective assertion.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: eac000978c1feb5a9ee3236ab0c0da9a477e5336
    master date: 2022-10-11 14:22:24 +0200
---
 xen/arch/x86/mm/shadow/multi.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 99e410d999..c129b8103e 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -3854,6 +3854,7 @@ sh_set_toplevel_shadow(struct vcpu *v,
     /* Now figure out the new contents: is this a valid guest MFN? */
     if ( !mfn_valid(gmfn) )
     {
+        ASSERT(mfn_eq(gmfn, INVALID_MFN));
         new_entry = pagetable_null();
         goto install_new_entry;
     }
@@ -4007,6 +4008,11 @@ sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
     if ( sh_remove_write_access(d, gmfn, 2, 0) != 0 )
         guest_flush_tlb_mask(d, d->dirty_cpumask);
     sh_set_toplevel_shadow(v, 0, gmfn, SH_type_l2_shadow);
+    if ( unlikely(pagetable_is_null(v->arch.shadow_table[0])) )
+    {
+        ASSERT(d->is_dying || d->is_shutting_down);
+        return;
+    }
 #elif GUEST_PAGING_LEVELS == 3
     /* PAE guests have four shadow_table entries, based on the
      * current values of the guest's four l3es. */
@@ -4052,6 +4058,11 @@ sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
     if ( sh_remove_write_access(d, gmfn, 4, 0) != 0 )
         guest_flush_tlb_mask(d, d->dirty_cpumask);
     sh_set_toplevel_shadow(v, 0, gmfn, SH_type_l4_shadow);
+    if ( unlikely(pagetable_is_null(v->arch.shadow_table[0])) )
+    {
+        ASSERT(d->is_dying || d->is_shutting_down);
+        return;
+    }
     if ( !shadow_mode_external(d) && !is_pv_32bit_domain(d) )
     {
         mfn_t smfn = pagetable_get_mfn(v->arch.shadow_table[0]);
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.14
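The "tolerate a blank top level entry" idea can be shown with a minimal stand-in for sh_update_cr3(): install whatever sh_set_toplevel_shadow() produced, but bail before using it if it turned out null. Again these are toy types; the real code additionally asserts that the domain is dying or shutting down on this path:

```c
#include <stdbool.h>
#include <stddef.h>

struct domain { bool is_dying; };

struct vcpu {
    struct domain *d;
    void *shadow_table;    /* NULL plays the role of pagetable_null() */
    bool cr3_loaded;       /* set once the shadow root is installed */
};

/* Stand-in for sh_set_toplevel_shadow(): installs a blank entry when
 * the prealloc fails (modelled here as "the domain is dying"). */
static void set_toplevel_shadow(struct vcpu *v)
{
    static int dummy_root;

    v->shadow_table = v->d->is_dying ? NULL : &dummy_root;
}

/* Mirrors the sh_update_cr3() hunks: tolerate a blank top level entry
 * by bailing instead of dereferencing it further down. */
static bool update_cr3(struct vcpu *v)
{
    set_toplevel_shadow(v);
    if (v->shadow_table == NULL)
        return false;       /* only expected for a dying domain */
    v->cr3_loaded = true;
    return true;
}
```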


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 18:22:54 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 18:22:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431162.683791 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo7Wo-0006BY-Ji; Thu, 27 Oct 2022 18:22:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431162.683791; Thu, 27 Oct 2022 18:22:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo7Wo-0006BQ-Gb; Thu, 27 Oct 2022 18:22:54 +0000
Received: by outflank-mailman (input) for mailman id 431162;
 Thu, 27 Oct 2022 18:22:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo7Wn-0006BC-1P
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 18:22:53 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo7Wn-0000hi-0f
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 18:22:53 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo7Wm-0003gZ-VJ
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 18:22:52 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=OrnWCpoPyhw5HbTTRd+UTEkNXIUJAM1SFpvebHzXKt0=; b=lFi5V692QbRn8ZxTKpgZ7ysUnO
	lRVrVo9EYxXyCEpGk3DItDbInF7m0hFXnrKHdhgaHa2TH+axYmjqyQKbXl8+1+jYNFrkVLgVJkyCr
	tO0hLnl47voqxSxvf/N5QLLLpXiF8HHLoPhdYUWqXA3iJSTK/bbb4Lhcg0SXenC8VkW8=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.14] x86/shadow: tolerate failure in shadow_prealloc()
Message-Id: <E1oo7Wm-0003gZ-VJ@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 18:22:52 +0000

commit b8f4a5de683efbe402db65483d845573c30dbb3f
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 15:36:21 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:36:21 2022 +0200

    x86/shadow: tolerate failure in shadow_prealloc()
    
    Prevent _shadow_prealloc() from calling BUG() when unable to fulfill
    the pre-allocation and instead return true/false.  Modify
    shadow_prealloc() to crash the domain on allocation failure (if the
    domain is not already dying), as shadow cannot operate normally after
    that.  Modify callers to also gracefully handle {_,}shadow_prealloc()
    failing to fulfill the request.
    
    Note this in turn requires adjusting the callers of
    sh_make_monitor_table() also to handle it returning INVALID_MFN.
    sh_update_paging_modes() is also modified to add additional error
    paths in case of allocation failure, some of those will return with
    null monitor page tables (and the domain likely crashed).  This is no
    different from the current error paths, but the newly introduced ones are
    more likely to trigger.
    
    The now added failure points in sh_update_paging_modes() also require
    that on some error return paths the previous structures are cleared,
    and thus the monitor table is null.
    
    While there, adjust the 'type' parameter type of shadow_prealloc() to
    unsigned int rather than u32.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: b7f93c6afb12b6061e2d19de2f39ea09b569ac68
    master date: 2022-10-11 14:22:53 +0200
---
 xen/arch/x86/mm/shadow/common.c  | 62 ++++++++++++++++++++++++++++++----------
 xen/arch/x86/mm/shadow/multi.c   | 21 ++++++++++----
 xen/arch/x86/mm/shadow/private.h |  3 +-
 3 files changed, 65 insertions(+), 21 deletions(-)

diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 4436ea2c51..6f71636746 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -36,6 +36,7 @@
 #include <asm/shadow.h>
 #include <asm/hvm/ioreq.h>
 #include <xen/numa.h>
+#include <public/sched.h>
 #include "private.h"
 
 DEFINE_PER_CPU(uint32_t,trace_shadow_path_flags);
@@ -927,14 +928,15 @@ static inline void trace_shadow_prealloc_unpin(struct domain *d, mfn_t smfn)
 
 /* Make sure there are at least count order-sized pages
  * available in the shadow page pool. */
-static void _shadow_prealloc(struct domain *d, unsigned int pages)
+static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
 {
     struct vcpu *v;
     struct page_info *sp, *t;
     mfn_t smfn;
     int i;
 
-    if ( d->arch.paging.shadow.free_pages >= pages ) return;
+    if ( d->arch.paging.shadow.free_pages >= pages )
+        return true;
 
     /* Shouldn't have enabled shadows if we've no vcpus. */
     ASSERT(d->vcpu && d->vcpu[0]);
@@ -950,7 +952,8 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
         sh_unpin(d, smfn);
 
         /* See if that freed up enough space */
-        if ( d->arch.paging.shadow.free_pages >= pages ) return;
+        if ( d->arch.paging.shadow.free_pages >= pages )
+            return true;
     }
 
     /* Stage two: all shadow pages are in use in hierarchies that are
@@ -971,7 +974,7 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
                 if ( d->arch.paging.shadow.free_pages >= pages )
                 {
                     guest_flush_tlb_mask(d, d->dirty_cpumask);
-                    return;
+                    return true;
                 }
             }
         }
@@ -984,7 +987,12 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
            d->arch.paging.shadow.total_pages,
            d->arch.paging.shadow.free_pages,
            d->arch.paging.shadow.p2m_pages);
-    BUG();
+
+    ASSERT(d->is_dying);
+
+    guest_flush_tlb_mask(d, d->dirty_cpumask);
+
+    return false;
 }
 
 /* Make sure there are at least count pages of the order according to
@@ -992,9 +1000,19 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
  * This must be called before any calls to shadow_alloc().  Since this
  * will free existing shadows to make room, it must be called early enough
  * to avoid freeing shadows that the caller is currently working on. */
-void shadow_prealloc(struct domain *d, u32 type, unsigned int count)
+bool shadow_prealloc(struct domain *d, unsigned int type, unsigned int count)
 {
-    return _shadow_prealloc(d, shadow_size(type) * count);
+    bool ret = _shadow_prealloc(d, shadow_size(type) * count);
+
+    if ( !ret && !d->is_dying &&
+         (!d->is_shutting_down || d->shutdown_code != SHUTDOWN_crash) )
+        /*
+         * Failing to allocate memory required for shadow usage can only result in
+         * a domain crash, do it here rather that relying on every caller to do it.
+         */
+        domain_crash(d);
+
+    return ret;
 }
 
 /* Deliberately free all the memory we can: this will tear down all of
@@ -1211,7 +1229,7 @@ void shadow_free(struct domain *d, mfn_t smfn)
 static struct page_info *
 shadow_alloc_p2m_page(struct domain *d)
 {
-    struct page_info *pg;
+    struct page_info *pg = NULL;
 
     /* This is called both from the p2m code (which never holds the
      * paging lock) and the log-dirty code (which always does). */
@@ -1229,16 +1247,18 @@ shadow_alloc_p2m_page(struct domain *d)
                     d->arch.paging.shadow.p2m_pages,
                     shadow_min_acceptable_pages(d));
         }
-        paging_unlock(d);
-        return NULL;
+        goto out;
     }
 
-    shadow_prealloc(d, SH_type_p2m_table, 1);
+    if ( !shadow_prealloc(d, SH_type_p2m_table, 1) )
+        goto out;
+
     pg = mfn_to_page(shadow_alloc(d, SH_type_p2m_table, 0));
     d->arch.paging.shadow.p2m_pages++;
     d->arch.paging.shadow.total_pages--;
     ASSERT(!page_get_owner(pg) && !(pg->count_info & PGC_count_mask));
 
+ out:
     paging_unlock(d);
 
     return pg;
@@ -1329,7 +1349,9 @@ int shadow_set_allocation(struct domain *d, unsigned int pages, bool *preempted)
         else if ( d->arch.paging.shadow.total_pages > pages )
         {
             /* Need to return memory to domheap */
-            _shadow_prealloc(d, 1);
+            if ( !_shadow_prealloc(d, 1) )
+                return -ENOMEM;
+
             sp = page_list_remove_head(&d->arch.paging.shadow.freelist);
             ASSERT(sp);
             /*
@@ -2397,12 +2419,13 @@ static void sh_update_paging_modes(struct vcpu *v)
     if ( mfn_eq(v->arch.paging.shadow.oos_snapshot[0], INVALID_MFN) )
     {
         int i;
+
+        if ( !shadow_prealloc(d, SH_type_oos_snapshot, SHADOW_OOS_PAGES) )
+            return;
+
         for(i = 0; i < SHADOW_OOS_PAGES; i++)
-        {
-            shadow_prealloc(d, SH_type_oos_snapshot, 1);
             v->arch.paging.shadow.oos_snapshot[i] =
                 shadow_alloc(d, SH_type_oos_snapshot, 0);
-        }
     }
 #endif /* OOS */
 
@@ -2464,6 +2487,10 @@ static void sh_update_paging_modes(struct vcpu *v)
         if ( pagetable_is_null(v->arch.hvm.monitor_table) )
         {
             mfn_t mmfn = v->arch.paging.mode->shadow.make_monitor_table(v);
+
+            if ( mfn_eq(mmfn, INVALID_MFN) )
+                return;
+
             v->arch.hvm.monitor_table = pagetable_from_mfn(mmfn);
             make_cr3(v, mmfn);
             hvm_update_host_cr3(v);
@@ -2501,6 +2528,11 @@ static void sh_update_paging_modes(struct vcpu *v)
                 old_mfn = pagetable_get_mfn(v->arch.hvm.monitor_table);
                 v->arch.hvm.monitor_table = pagetable_null();
                 new_mfn = v->arch.paging.mode->shadow.make_monitor_table(v);
+                if ( mfn_eq(new_mfn, INVALID_MFN) )
+                {
+                    old_mode->shadow.destroy_monitor_table(v, old_mfn);
+                    return;
+                }
                 v->arch.hvm.monitor_table = pagetable_from_mfn(new_mfn);
                 SHADOW_PRINTK("new monitor table %"PRI_mfn "\n",
                                mfn_x(new_mfn));
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index c129b8103e..aaf56d295e 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -1535,7 +1535,8 @@ sh_make_monitor_table(struct vcpu *v)
     ASSERT(pagetable_get_pfn(v->arch.hvm.monitor_table) == 0);
 
     /* Guarantee we can get the memory we need */
-    shadow_prealloc(d, SH_type_monitor_table, CONFIG_PAGING_LEVELS);
+    if ( !shadow_prealloc(d, SH_type_monitor_table, CONFIG_PAGING_LEVELS) )
+        return INVALID_MFN;
 
     {
         mfn_t m4mfn;
@@ -3067,9 +3068,14 @@ static int sh_page_fault(struct vcpu *v,
      * Preallocate shadow pages *before* removing writable accesses
      * otherwhise an OOS L1 might be demoted and promoted again with
      * writable mappings. */
-    shadow_prealloc(d,
-                    SH_type_l1_shadow,
-                    GUEST_PAGING_LEVELS < 4 ? 1 : GUEST_PAGING_LEVELS - 1);
+    if ( !shadow_prealloc(d, SH_type_l1_shadow,
+                          GUEST_PAGING_LEVELS < 4
+                          ? 1 : GUEST_PAGING_LEVELS - 1) )
+    {
+        paging_unlock(d);
+        put_gfn(d, gfn_x(gfn));
+        return 0;
+    }
 
     rc = gw_remove_write_accesses(v, va, &gw);
 
@@ -3864,7 +3870,12 @@ sh_set_toplevel_shadow(struct vcpu *v,
     if ( !mfn_valid(smfn) )
     {
         /* Make sure there's enough free shadow memory. */
-        shadow_prealloc(d, root_type, 1);
+        if ( !shadow_prealloc(d, root_type, 1) )
+        {
+            new_entry = pagetable_null();
+            goto install_new_entry;
+        }
+
         /* Shadow the page. */
         smfn = sh_make_shadow(v, gmfn, root_type);
     }
diff --git a/xen/arch/x86/mm/shadow/private.h b/xen/arch/x86/mm/shadow/private.h
index 3fd3f0617a..e2100f0f34 100644
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -351,7 +351,8 @@ void shadow_promote(struct domain *d, mfn_t gmfn, u32 type);
 void shadow_demote(struct domain *d, mfn_t gmfn, u32 type);
 
 /* Shadow page allocation functions */
-void  shadow_prealloc(struct domain *d, u32 shadow_type, unsigned int count);
+bool __must_check shadow_prealloc(struct domain *d, unsigned int shadow_type,
+                                  unsigned int count);
 mfn_t shadow_alloc(struct domain *d,
                     u32 shadow_type,
                     unsigned long backpointer);
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.14
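The shape of this change - a `__must_check` core that reports failure instead of calling BUG(), plus a wrapper that crashes the domain in one central place - can be sketched as follows. This is simplified: the real predicate also exempts domains shutting down with SHUTDOWN_crash, and the two reclaim stages are omitted:

```c
#include <stdbool.h>

#ifdef __GNUC__
#define __must_check __attribute__((warn_unused_result))
#else
#define __must_check
#endif

struct domain {
    bool is_dying;
    bool crashed;               /* records that domain_crash() ran */
    unsigned int free_pages;
};

static void domain_crash(struct domain *d) { d->crashed = true; }

/* Core prealloc: succeeds when enough pages are already free; returns
 * false instead of BUG()ing when the request cannot be met. */
static bool __must_check _prealloc(struct domain *d, unsigned int pages)
{
    if (d->free_pages >= pages)
        return true;
    if (d->is_dying)
        return false;           /* no reclaim for a dying domain */
    /* the reclaim stages would go here; this sketch has none */
    return false;
}

/* Wrapper mirroring shadow_prealloc(): on failure, crash the domain
 * here once rather than relying on every caller to do it. */
static bool __must_check prealloc(struct domain *d, unsigned int pages)
{
    bool ok = _prealloc(d, pages);

    if (!ok && !d->is_dying)
        domain_crash(d);
    return ok;
}
```

Callers then just test the return value and unwind (unlock, return an error) on `false`, which is exactly what the hunks in sh_page_fault(), sh_make_monitor_table() and shadow_alloc_p2m_page() do.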


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 18:23:04 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 18:23:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431163.683795 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo7Wy-0006En-Mg; Thu, 27 Oct 2022 18:23:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431163.683795; Thu, 27 Oct 2022 18:23:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo7Wy-0006Ef-Jg; Thu, 27 Oct 2022 18:23:04 +0000
Received: by outflank-mailman (input) for mailman id 431163;
 Thu, 27 Oct 2022 18:23:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo7Wx-0006EK-4O
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 18:23:03 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo7Wx-0000iA-3c
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 18:23:03 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo7Wx-0003hG-31
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 18:23:03 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=x2TH+S7Inp2SEZmBcDg2jDWTC8hFdig/3HiTBIk32yI=; b=ux2DWyOOFPWdwcRbTACJ0SEOYs
	5Y8e0c/TrFPg+E/vKMEe0Yr1D8P3NSABuv48Kb2L3LG/ubO1EmTcLVoOYQr6eT5p3UiTrfLlvCWql
	hxEvMC866WD6cDuiE7kjAF5fKLJBwPZ11XinzHDp7QyNCSDz1GHpBJ5uI7Vpqo6biyRE=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.14] x86/p2m: refuse new allocations for dying domains
Message-Id: <E1oo7Wx-0003hG-31@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 18:23:03 +0000

commit 9b5a7fd916a74295886a7d473c311e3c7e254e54
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 15:37:32 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:37:32 2022 +0200

    x86/p2m: refuse new allocations for dying domains
    
    This will in particular prevent any attempts to add entries to the p2m,
    once - in a subsequent change - non-root entries have been removed.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: ff600a8cf8e36f8ecbffecf96a035952e022ab87
    master date: 2022-10-11 14:23:22 +0200
---
 xen/arch/x86/mm/hap/hap.c       |  5 ++++-
 xen/arch/x86/mm/shadow/common.c | 18 ++++++++++++++----
 2 files changed, 18 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 3d626fe149..7eeeb1f472 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -244,6 +244,9 @@ static struct page_info *hap_alloc(struct domain *d)
 
     ASSERT(paging_locked_by_me(d));
 
+    if ( unlikely(d->is_dying) )
+        return NULL;
+
     pg = page_list_remove_head(&d->arch.paging.hap.freelist);
     if ( unlikely(!pg) )
         return NULL;
@@ -280,7 +283,7 @@ static struct page_info *hap_alloc_p2m_page(struct domain *d)
         d->arch.paging.hap.p2m_pages++;
         ASSERT(!page_get_owner(pg) && !(pg->count_info & PGC_count_mask));
     }
-    else if ( !d->arch.paging.p2m_alloc_failed )
+    else if ( !d->arch.paging.p2m_alloc_failed && !d->is_dying )
     {
         d->arch.paging.p2m_alloc_failed = 1;
         dprintk(XENLOG_ERR, "d%i failed to allocate from HAP pool\n",
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 6f71636746..8eed7e72fe 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -938,6 +938,10 @@ static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
     if ( d->arch.paging.shadow.free_pages >= pages )
         return true;
 
+    if ( unlikely(d->is_dying) )
+        /* No reclaim when the domain is dying, teardown will take care of it. */
+        return false;
+
     /* Shouldn't have enabled shadows if we've no vcpus. */
     ASSERT(d->vcpu && d->vcpu[0]);
 
@@ -988,7 +992,7 @@ static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
            d->arch.paging.shadow.free_pages,
            d->arch.paging.shadow.p2m_pages);
 
-    ASSERT(d->is_dying);
+    ASSERT_UNREACHABLE();
 
     guest_flush_tlb_mask(d, d->dirty_cpumask);
 
@@ -1002,10 +1006,13 @@ static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
  * to avoid freeing shadows that the caller is currently working on. */
 bool shadow_prealloc(struct domain *d, unsigned int type, unsigned int count)
 {
-    bool ret = _shadow_prealloc(d, shadow_size(type) * count);
+    bool ret;
 
-    if ( !ret && !d->is_dying &&
-         (!d->is_shutting_down || d->shutdown_code != SHUTDOWN_crash) )
+    if ( unlikely(d->is_dying) )
+       return false;
+
+    ret = _shadow_prealloc(d, shadow_size(type) * count);
+    if ( !ret && (!d->is_shutting_down || d->shutdown_code != SHUTDOWN_crash) )
         /*
          * Failing to allocate memory required for shadow usage can only result in
          * a domain crash, do it here rather that relying on every caller to do it.
@@ -1231,6 +1238,9 @@ shadow_alloc_p2m_page(struct domain *d)
 {
     struct page_info *pg = NULL;
 
+    if ( unlikely(d->is_dying) )
+       return NULL;
+
     /* This is called both from the p2m code (which never holds the
      * paging lock) and the log-dirty code (which always does). */
     paging_lock_recursive(d);
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.14


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 18:23:14 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 18:23:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431164.683799 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo7X8-0006Hd-O5; Thu, 27 Oct 2022 18:23:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431164.683799; Thu, 27 Oct 2022 18:23:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo7X8-0006HW-LE; Thu, 27 Oct 2022 18:23:14 +0000
Received: by outflank-mailman (input) for mailman id 431164;
 Thu, 27 Oct 2022 18:23:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo7X7-0006HI-7T
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 18:23:13 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo7X7-0000iK-6q
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 18:23:13 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo7X7-0003hh-62
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 18:23:13 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=rjj7FcWEofpdzAf4rVpbaZVgEuCJtTooE/fv3+pmgVg=; b=UCeTQbaZHqxD19vf5L/NA1Vbd3
	yk23BVLbgODeCgqWF/+3CekQmypEP2cvmQaXFO98F2FdzLKyt81SiDpVFWCUQwdJXP1AWA3ADjF1T
	DSnKLyYdxjuYZDlTj8OGX4fH0sdD5m9TbT09qmK92SY4QAV5bvCAtAULjHHso36ys6qY=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.14] x86/p2m: truly free paging pool memory for dying domains
Message-Id: <E1oo7X7-0003hh-62@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 18:23:13 +0000

commit fc1098471822d80a35c6f1ac1ec8c7b45caf6eab
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 15:38:09 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:38:09 2022 +0200

    x86/p2m: truly free paging pool memory for dying domains
    
    Modify {hap,shadow}_free to free the page immediately if the domain is
    dying, so that pages don't accumulate in the pool when
    {shadow,hap}_final_teardown() get called. This is to limit the amount of
    work which needs to be done there (in a non-preemptable manner).
    
    Note the call to shadow_free() in shadow_free_p2m_page() is moved after
    increasing total_pages, so that the decrease done in shadow_free() in
    case the domain is dying doesn't underflow the counter, even if just for
    a short interval.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: f50a2c0e1d057c00d6061f40ae24d068226052ad
    master date: 2022-10-11 14:23:51 +0200
---
 xen/arch/x86/mm/hap/hap.c       | 12 ++++++++++++
 xen/arch/x86/mm/shadow/common.c | 28 +++++++++++++++++++++++++---
 2 files changed, 37 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 7eeeb1f472..febd47e32d 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -264,6 +264,18 @@ static void hap_free(struct domain *d, mfn_t mfn)
 
     ASSERT(paging_locked_by_me(d));
 
+    /*
+     * For dying domains, actually free the memory here. This way less work is
+     * left to hap_final_teardown(), which cannot easily have preemption checks
+     * added.
+     */
+    if ( unlikely(d->is_dying) )
+    {
+        free_domheap_page(pg);
+        d->arch.paging.hap.total_pages--;
+        return;
+    }
+
     d->arch.paging.hap.free_pages++;
     page_list_add_tail(pg, &d->arch.paging.hap.freelist);
 }
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 8eed7e72fe..730c82dcb1 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -1180,6 +1180,7 @@ mfn_t shadow_alloc(struct domain *d,
 void shadow_free(struct domain *d, mfn_t smfn)
 {
     struct page_info *next = NULL, *sp = mfn_to_page(smfn);
+    bool dying = ACCESS_ONCE(d->is_dying);
     struct page_list_head *pin_list;
     unsigned int pages;
     u32 shadow_type;
@@ -1222,11 +1223,32 @@ void shadow_free(struct domain *d, mfn_t smfn)
          * just before the allocator hands the page out again. */
         page_set_tlbflush_timestamp(sp);
         perfc_decr(shadow_alloc_count);
-        page_list_add_tail(sp, &d->arch.paging.shadow.freelist);
+
+        /*
+         * For dying domains, actually free the memory here. This way less
+         * work is left to shadow_final_teardown(), which cannot easily have
+         * preemption checks added.
+         */
+        if ( unlikely(dying) )
+        {
+            /*
+             * The backpointer field (sh.back) used by shadow code aliases the
+             * domain owner field, unconditionally clear it here to avoid
+             * free_domheap_page() attempting to parse it.
+             */
+            page_set_owner(sp, NULL);
+            free_domheap_page(sp);
+        }
+        else
+            page_list_add_tail(sp, &d->arch.paging.shadow.freelist);
+
         sp = next;
     }
 
-    d->arch.paging.shadow.free_pages += pages;
+    if ( unlikely(dying) )
+        d->arch.paging.shadow.total_pages -= pages;
+    else
+        d->arch.paging.shadow.free_pages += pages;
 }
 
 /* Divert a page from the pool to be used by the p2m mapping.
@@ -1296,9 +1318,9 @@ shadow_free_p2m_page(struct domain *d, struct page_info *pg)
      * paging lock) and the log-dirty code (which always does). */
     paging_lock_recursive(d);
 
-    shadow_free(d, page_to_mfn(pg));
     d->arch.paging.shadow.p2m_pages--;
     d->arch.paging.shadow.total_pages++;
+    shadow_free(d, page_to_mfn(pg));
 
     paging_unlock(d);
 }
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.14


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 18:23:25 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 18:23:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431165.683803 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo7XI-0006KC-PT; Thu, 27 Oct 2022 18:23:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431165.683803; Thu, 27 Oct 2022 18:23:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo7XI-0006K4-Ms; Thu, 27 Oct 2022 18:23:24 +0000
Received: by outflank-mailman (input) for mailman id 431165;
 Thu, 27 Oct 2022 18:23:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo7XH-0006Jr-At
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 18:23:23 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo7XH-0000if-AE
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 18:23:23 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo7XH-0003i6-9L
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 18:23:23 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=BTL6JT4cRAr0eUFLjAX4Nhfr/Q+YObPbDQYg6kPGKJU=; b=ksi45cMmeGMhuE/py+PdtSa2h/
	uQF9NEIAqIJgHvaHs4efeLbL3xiU0q1NDBVNwmrRowEcnHFeMIyVd9vqUbWEy8LbaNrRHFdFuIOZE
	p+bcxipklj4YI8OTzby4DauP1WRDcb+OHrxD8F3WBc3POjHyDnS7qA+D3Q6S1vDhSOSs=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.14] x86/p2m: free the paging memory pool preemptively
Message-Id: <E1oo7XH-0003i6-9L@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 18:23:23 +0000

commit f90615ce03c14b5288bdacd796ada23b4e9d0f7b
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 15:38:30 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:38:30 2022 +0200

    x86/p2m: free the paging memory pool preemptively
    
    The paging memory pool is currently freed in two different places:
    from {shadow,hap}_teardown() via domain_relinquish_resources() and
    from {shadow,hap}_final_teardown() via complete_domain_destroy().
    While the former does handle preemption, the latter doesn't.
    
    Attempt to move as much p2m related freeing as possible to happen
    before the call to {shadow,hap}_teardown(), so that most memory can be
    freed in a preemptive way.  In order to avoid causing issues to
    existing callers, leave the root p2m page tables set and free them in
    {hap,shadow}_final_teardown().  Also modify {hap,shadow}_free to free
    the page immediately if the domain is dying, so that pages don't
    accumulate in the pool when {shadow,hap}_final_teardown() get called.
    
    Move altp2m_vcpu_disable_ve() to be done in hap_teardown(), as that's
    the place where altp2m_active gets disabled now.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Reported-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: e7aa55c0aab36d994bf627c92bd5386ae167e16e
    master date: 2022-10-11 14:24:21 +0200
---
 xen/arch/x86/domain.c           |  7 -------
 xen/arch/x86/mm/hap/hap.c       | 46 +++++++++++++++++++++++++++--------------
 xen/arch/x86/mm/shadow/common.c | 16 ++++++++++++++
 3 files changed, 46 insertions(+), 23 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 3658e50d56..4fb78d38e7 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -38,7 +38,6 @@
 #include <xen/livepatch.h>
 #include <public/sysctl.h>
 #include <public/hvm/hvm_vcpu.h>
-#include <asm/altp2m.h>
 #include <asm/regs.h>
 #include <asm/mc146818rtc.h>
 #include <asm/system.h>
@@ -2120,12 +2119,6 @@ int domain_relinquish_resources(struct domain *d)
             vpmu_destroy(v);
         }
 
-        if ( altp2m_active(d) )
-        {
-            for_each_vcpu ( d, v )
-                altp2m_vcpu_disable_ve(v);
-        }
-
         if ( is_pv_domain(d) )
         {
             for_each_vcpu ( d, v )
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index febd47e32d..be46d6e01f 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -28,6 +28,7 @@
 #include <xen/domain_page.h>
 #include <xen/guest_access.h>
 #include <xen/keyhandler.h>
+#include <asm/altp2m.h>
 #include <asm/event.h>
 #include <asm/page.h>
 #include <asm/current.h>
@@ -545,24 +546,8 @@ void hap_final_teardown(struct domain *d)
     unsigned int i;
 
     if ( hvm_altp2m_supported() )
-    {
-        d->arch.altp2m_active = 0;
-
-        if ( d->arch.altp2m_eptp )
-        {
-            free_xenheap_page(d->arch.altp2m_eptp);
-            d->arch.altp2m_eptp = NULL;
-        }
-
-        if ( d->arch.altp2m_visible_eptp )
-        {
-            free_xenheap_page(d->arch.altp2m_visible_eptp);
-            d->arch.altp2m_visible_eptp = NULL;
-        }
-
         for ( i = 0; i < MAX_ALTP2M; i++ )
             p2m_teardown(d->arch.altp2m_p2m[i], true);
-    }
 
     /* Destroy nestedp2m's first */
     for (i = 0; i < MAX_NESTEDP2M; i++) {
@@ -577,6 +562,8 @@ void hap_final_teardown(struct domain *d)
     paging_lock(d);
     hap_set_allocation(d, 0, NULL);
     ASSERT(d->arch.paging.hap.p2m_pages == 0);
+    ASSERT(d->arch.paging.hap.free_pages == 0);
+    ASSERT(d->arch.paging.hap.total_pages == 0);
     paging_unlock(d);
 }
 
@@ -584,6 +571,7 @@ void hap_teardown(struct domain *d, bool *preempted)
 {
     struct vcpu *v;
     mfn_t mfn;
+    unsigned int i;
 
     ASSERT(d->is_dying);
     ASSERT(d != current->domain);
@@ -605,6 +593,32 @@ void hap_teardown(struct domain *d, bool *preempted)
         }
     }
 
+    paging_unlock(d);
+
+    /* Leave the root pt in case we get further attempts to modify the p2m. */
+    if ( hvm_altp2m_supported() )
+    {
+        if ( altp2m_active(d) )
+            for_each_vcpu ( d, v )
+                altp2m_vcpu_disable_ve(v);
+
+        d->arch.altp2m_active = 0;
+
+        FREE_XENHEAP_PAGE(d->arch.altp2m_eptp);
+        FREE_XENHEAP_PAGE(d->arch.altp2m_visible_eptp);
+
+        for ( i = 0; i < MAX_ALTP2M; i++ )
+            p2m_teardown(d->arch.altp2m_p2m[i], false);
+    }
+
+    /* Destroy nestedp2m's after altp2m. */
+    for ( i = 0; i < MAX_NESTEDP2M; i++ )
+        p2m_teardown(d->arch.nested_p2m[i], false);
+
+    p2m_teardown(p2m_get_hostp2m(d), false);
+
+    paging_lock(d);
+
     if ( d->arch.paging.hap.total_pages != 0 )
     {
         hap_set_allocation(d, 0, preempted);
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 730c82dcb1..bedb779ca4 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2795,6 +2795,19 @@ void shadow_teardown(struct domain *d, bool *preempted)
         }
     }
 
+    paging_unlock(d);
+
+    p2m_teardown(p2m_get_hostp2m(d), false);
+
+    paging_lock(d);
+
+    /*
+     * Reclaim all shadow memory so that shadow_set_allocation() doesn't find
+     * in-use pages, as _shadow_prealloc() will no longer try to reclaim pages
+     * because the domain is dying.
+     */
+    shadow_blow_tables(d);
+
 #if (SHADOW_OPTIMIZATIONS & (SHOPT_VIRTUAL_TLB|SHOPT_OUT_OF_SYNC))
     /* Free the virtual-TLB array attached to each vcpu */
     for_each_vcpu(d, v)
@@ -2913,6 +2926,9 @@ void shadow_final_teardown(struct domain *d)
                    d->arch.paging.shadow.total_pages,
                    d->arch.paging.shadow.free_pages,
                    d->arch.paging.shadow.p2m_pages);
+    ASSERT(!d->arch.paging.shadow.total_pages);
+    ASSERT(!d->arch.paging.shadow.free_pages);
+    ASSERT(!d->arch.paging.shadow.p2m_pages);
     paging_unlock(d);
 }
 
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.14


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 18:23:34 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 18:23:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431166.683807 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo7XS-0006N4-R4; Thu, 27 Oct 2022 18:23:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431166.683807; Thu, 27 Oct 2022 18:23:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo7XS-0006Mw-OO; Thu, 27 Oct 2022 18:23:34 +0000
Received: by outflank-mailman (input) for mailman id 431166;
 Thu, 27 Oct 2022 18:23:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo7XR-0006MV-EF
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 18:23:33 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo7XR-0000iq-DZ
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 18:23:33 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo7XR-0003if-Co
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 18:23:33 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=Eycf/Ro8fEHhuNWtBvOaJeCfBq2KaWgl1JMfDI89u6c=; b=gaCc/adTRi/8jdRf79aw08/oU+
	K5yDDIb0HW56EEeAiOUxq251uzwO3MmqKqv4yDf/TMjxIY6pDlYWoRhddyoH1bcztlTUjHknmzwcS
	P3rE5idtY3hxKvcXUToM5IVPT2dg7jrmXR+Kp0p/WmUdcdqqUtmixTzrSmo/0wgHhuzw=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.14] xen/x86: p2m: Add preemption in p2m_teardown()
Message-Id: <E1oo7XR-0003if-Co@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 18:23:33 +0000

commit 804f83bfba8e73ed99a2f839c6731fa2aa9fb7bb
Author:     Julien Grall <jgrall@amazon.com>
AuthorDate: Tue Oct 11 15:38:43 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:38:43 2022 +0200

    xen/x86: p2m: Add preemption in p2m_teardown()
    
    The list p2m->pages contains all the pages used by the P2M. On large
    instances this can be quite large, and the time spent calling
    d->arch.paging.free_page() can exceed 1ms for an 80GB guest
    on Xen running in a nested environment on a c5.metal.
    
    By extrapolation, it would take > 100ms for an 8TB guest (which we
    currently security support). So add some preemption in p2m_teardown()
    and propagate it to the callers. Note there are 3 places where
    the preemption is not enabled:
        - hap_final_teardown()/shadow_final_teardown(): We are
          preventing updates to the P2M once the domain is dying (so
          no more pages can be allocated), and most of the P2M pages
          will be freed in a preemptive manner when relinquishing the
          resources. So it is fine to disable preemption here.
        - shadow_enable(): This is fine because it will undo the allocation
          that may have been made by p2m_alloc_table() (so only the root
          page table).
    
    The preemption is arbitrarily checked every 1024 iterations.
    
    We now need to include <xen/event.h> in p2m-basic in order to
    import the definition for local_events_need_delivery() used by
    general_preempt_check(). Ideally, the inclusion should happen in
    xen/sched.h but it opened a can of worms.
    
    Note that with the current approach, Xen doesn't keep track of whether
    the alt/nested P2Ms have been cleared, so there is some redundant work.
    However, this is not expected to incur too much overhead (the P2M lock
    shouldn't be contended during teardown), so this optimization is
    left outside of the security event.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    master commit: 8a2111250b424edc49c65c4d41b276766d30635c
    master date: 2022-10-11 14:24:48 +0200
---
 xen/arch/x86/mm/hap/hap.c       | 22 ++++++++++++++++------
 xen/arch/x86/mm/p2m.c           | 18 +++++++++++++++---
 xen/arch/x86/mm/shadow/common.c | 12 +++++++++---
 xen/include/asm-x86/p2m.h       |  2 +-
 4 files changed, 41 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index be46d6e01f..406c237eed 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -547,17 +547,17 @@ void hap_final_teardown(struct domain *d)
 
     if ( hvm_altp2m_supported() )
         for ( i = 0; i < MAX_ALTP2M; i++ )
-            p2m_teardown(d->arch.altp2m_p2m[i], true);
+            p2m_teardown(d->arch.altp2m_p2m[i], true, NULL);
 
     /* Destroy nestedp2m's first */
     for (i = 0; i < MAX_NESTEDP2M; i++) {
-        p2m_teardown(d->arch.nested_p2m[i], true);
+        p2m_teardown(d->arch.nested_p2m[i], true, NULL);
     }
 
     if ( d->arch.paging.hap.total_pages != 0 )
         hap_teardown(d, NULL);
 
-    p2m_teardown(p2m_get_hostp2m(d), true);
+    p2m_teardown(p2m_get_hostp2m(d), true, NULL);
     /* Free any memory that the p2m teardown released */
     paging_lock(d);
     hap_set_allocation(d, 0, NULL);
@@ -608,14 +608,24 @@ void hap_teardown(struct domain *d, bool *preempted)
         FREE_XENHEAP_PAGE(d->arch.altp2m_visible_eptp);
 
         for ( i = 0; i < MAX_ALTP2M; i++ )
-            p2m_teardown(d->arch.altp2m_p2m[i], false);
+        {
+            p2m_teardown(d->arch.altp2m_p2m[i], false, preempted);
+            if ( preempted && *preempted )
+                return;
+        }
     }
 
     /* Destroy nestedp2m's after altp2m. */
     for ( i = 0; i < MAX_NESTEDP2M; i++ )
-        p2m_teardown(d->arch.nested_p2m[i], false);
+    {
+        p2m_teardown(d->arch.nested_p2m[i], false, preempted);
+        if ( preempted && *preempted )
+            return;
+    }
 
-    p2m_teardown(p2m_get_hostp2m(d), false);
+    p2m_teardown(p2m_get_hostp2m(d), false, preempted);
+    if ( preempted && *preempted )
+        return;
 
     paging_lock(d);
 
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 7ec6466922..39cfce47a3 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -737,12 +737,13 @@ int p2m_alloc_table(struct p2m_domain *p2m)
  * hvm fixme: when adding support for pvh non-hardware domains, this path must
  * cleanup any foreign p2m types (release refcnts on them).
  */
-void p2m_teardown(struct p2m_domain *p2m, bool remove_root)
+void p2m_teardown(struct p2m_domain *p2m, bool remove_root, bool *preempted)
 /* Return all the p2m pages to Xen.
  * We know we don't have any extra mappings to these pages */
 {
     struct page_info *pg, *root_pg = NULL;
     struct domain *d;
+    unsigned int i = 0;
 
     if (p2m == NULL)
         return;
@@ -761,8 +762,19 @@ void p2m_teardown(struct p2m_domain *p2m, bool remove_root)
     }
 
     while ( (pg = page_list_remove_head(&p2m->pages)) )
-        if ( pg != root_pg )
-            d->arch.paging.free_page(d, pg);
+    {
+        if ( pg == root_pg )
+            continue;
+
+        d->arch.paging.free_page(d, pg);
+
+        /* Arbitrarily check preemption every 1024 iterations */
+        if ( preempted && !(++i % 1024) && general_preempt_check() )
+        {
+            *preempted = true;
+            break;
+        }
+    }
 
     if ( root_pg )
         page_list_add(root_pg, &p2m->pages);
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index bedb779ca4..ba2ef80778 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2749,8 +2749,12 @@ int shadow_enable(struct domain *d, u32 mode)
  out_locked:
     paging_unlock(d);
  out_unlocked:
+    /*
+     * This is fine to ignore the preemption here because only the root
+     * will be allocated by p2m_alloc_table().
+     */
     if ( rv != 0 && !pagetable_is_null(p2m_get_pagetable(p2m)) )
-        p2m_teardown(p2m, true);
+        p2m_teardown(p2m, true, NULL);
     if ( rv != 0 && pg != NULL )
     {
         pg->count_info &= ~PGC_count_mask;
@@ -2797,7 +2801,9 @@ void shadow_teardown(struct domain *d, bool *preempted)
 
     paging_unlock(d);
 
-    p2m_teardown(p2m_get_hostp2m(d), false);
+    p2m_teardown(p2m_get_hostp2m(d), false, preempted);
+    if ( preempted && *preempted )
+        return;
 
     paging_lock(d);
 
@@ -2916,7 +2922,7 @@ void shadow_final_teardown(struct domain *d)
         shadow_teardown(d, NULL);
 
     /* It is now safe to pull down the p2m map. */
-    p2m_teardown(p2m_get_hostp2m(d), true);
+    p2m_teardown(p2m_get_hostp2m(d), true, NULL);
     /* Free any shadow memory that the p2m teardown released */
     paging_lock(d);
     shadow_set_allocation(d, 0, NULL);
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index cfe2e55fcf..3136fcb040 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -595,7 +595,7 @@ int p2m_init(struct domain *d);
 int p2m_alloc_table(struct p2m_domain *p2m);
 
 /* Return all the p2m resources to Xen. */
-void p2m_teardown(struct p2m_domain *p2m, bool remove_root);
+void p2m_teardown(struct p2m_domain *p2m, bool remove_root, bool *preempted);
 void p2m_final_teardown(struct domain *d);
 
 /* Add a page to a domain's p2m table */
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.14


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 18:23:44 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 18:23:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431167.683813 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo7Xc-0006QH-TI; Thu, 27 Oct 2022 18:23:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431167.683813; Thu, 27 Oct 2022 18:23:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo7Xc-0006Q3-Q2; Thu, 27 Oct 2022 18:23:44 +0000
Received: by outflank-mailman (input) for mailman id 431167;
 Thu, 27 Oct 2022 18:23:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo7Xb-0006Pi-HL
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 18:23:43 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo7Xb-0000j8-Gk
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 18:23:43 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo7Xb-0003j4-G2
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 18:23:43 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=cDT21+K2+fuTNqAgM0crb2FCeEyfiA764zvMhPuG7yw=; b=pJJuXkjFcVEShWji5meVPutStU
	I+Kxnl0X6VtFK0EQv+4dJd/ooYQ3zOYJWrm2ypYfkz5AGlrYSLEnnbTlPL6Tv3tXI2wmbaJLBTMrf
	LhInz5TQkzlWYaj6H/Yslc+e0ayKh7Ihnk8fBhvEKNrhIHXQbF+XMQQOAUKa+jm29FMo=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.14] libxl, docs: Use arch-specific default paging memory
Message-Id: <E1oo7Xb-0003j4-G2@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 18:23:43 +0000

commit e3b66e5cba89fc0b59c9a116e7414388d45e04a0
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Tue Oct 11 15:39:00 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:39:00 2022 +0200

    libxl, docs: Use arch-specific default paging memory
    
    The default paging memory (described in the `shadow_memory` entry in
    the xl config) in libxl is used to determine the memory pool size for
    xl guests. Currently this size is only used for x86, and includes a
    part of RAM to shadow the resident processes. Since there are no
    shadow-mode guests on Arm, that part of RAM is not necessary.
    Therefore, this commit splits the function
    `libxl_get_required_shadow_memory()` into arch-specific helpers and
    renames the helper to `libxl__arch_get_required_paging_memory()`.
    
    On x86, this helper keeps the original calculation from
    `libxl_get_required_shadow_memory()`, so no functional change is intended.
    
    On Arm, this helper returns 1MB per vCPU, plus 4KB per MiB of RAM
    for the P2M map, plus an additional 512KB.
    
    Also update the xl.cfg documentation to document the Arm behaviour
    according to the code changes, and correct the comment style to
    follow the Xen coding style.
    
    This is part of CVE-2022-33747 / XSA-409.
    
    Suggested-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    master commit: 156a239ea288972425f967ac807b3cb5b5e14874
    master date: 2022-10-11 14:28:37 +0200
---
 docs/man/xl.cfg.5.pod.in  |  5 +++++
 tools/libxl/libxl_arch.h  |  4 ++++
 tools/libxl/libxl_arm.c   | 12 ++++++++++++
 tools/libxl/libxl_utils.c |  9 ++-------
 tools/libxl/libxl_x86.c   | 12 ++++++++++++
 5 files changed, 35 insertions(+), 7 deletions(-)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 0532739c1f..2224080b30 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -1803,6 +1803,11 @@ are not using hardware assisted paging (i.e. you are using shadow
 mode) and your guest workload consists of a very large number of
 similar processes then increasing this value may improve performance.
 
+On Arm, this field is used to determine the size of the guest P2M pages
+pool, and the default value is 1MB per vCPU plus 4KB per MB of RAM for
+the P2M map. Users should adjust this value if bigger P2M pool size is
+needed.
+
 =back
 
 =head3 Processor and Platform Features
diff --git a/tools/libxl/libxl_arch.h b/tools/libxl/libxl_arch.h
index 6a91775b9e..b09f868490 100644
--- a/tools/libxl/libxl_arch.h
+++ b/tools/libxl/libxl_arch.h
@@ -83,6 +83,10 @@ int libxl__arch_extra_memory(libxl__gc *gc,
                              const libxl_domain_build_info *info,
                              uint64_t *out);
 
+_hidden
+unsigned long libxl__arch_get_required_paging_memory(unsigned long maxmem_kb,
+                                                     unsigned int smp_cpus);
+
 #if defined(__i386__) || defined(__x86_64__)
 
 #define LAPIC_BASE_ADDRESS  0xfee00000
diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
index 34f8a29056..f4b3dc8e71 100644
--- a/tools/libxl/libxl_arm.c
+++ b/tools/libxl/libxl_arm.c
@@ -153,6 +153,18 @@ out:
     return rc;
 }
 
+unsigned long libxl__arch_get_required_paging_memory(unsigned long maxmem_kb,
+                                                     unsigned int smp_cpus)
+{
+    /*
+     * 256 pages (1MB) per vcpu,
+     * plus 1 page per MiB of RAM for the P2M map,
+     * This is higher than the minimum that Xen would allocate if no value
+     * were given (but the Xen minimum is for safety, not performance).
+     */
+    return 4 * (256 * smp_cpus + maxmem_kb / 1024);
+}
+
 static struct arch_info {
     const char *guest_type;
     const char *timer_compat;
diff --git a/tools/libxl/libxl_utils.c b/tools/libxl/libxl_utils.c
index b039143b8a..e18b1524ef 100644
--- a/tools/libxl/libxl_utils.c
+++ b/tools/libxl/libxl_utils.c
@@ -18,6 +18,7 @@
 #include <ctype.h>
 
 #include "libxl_internal.h"
+#include "libxl_arch.h"
 #include "_paths.h"
 
 #ifndef LIBXL_HAVE_NONCONST_LIBXL_BASENAME_RETURN_VALUE
@@ -39,13 +40,7 @@ char *libxl_basename(const char *name)
 
 unsigned long libxl_get_required_shadow_memory(unsigned long maxmem_kb, unsigned int smp_cpus)
 {
-    /* 256 pages (1MB) per vcpu,
-       plus 1 page per MiB of RAM for the P2M map,
-       plus 1 page per MiB of RAM to shadow the resident processes.
-       This is higher than the minimum that Xen would allocate if no value
-       were given (but the Xen minimum is for safety, not performance).
-     */
-    return 4 * (256 * smp_cpus + 2 * (maxmem_kb / 1024));
+    return libxl__arch_get_required_paging_memory(maxmem_kb, smp_cpus);
 }
 
 char *libxl_domid_to_name(libxl_ctx *ctx, uint32_t domid)
diff --git a/tools/libxl/libxl_x86.c b/tools/libxl/libxl_x86.c
index 07c7b05e0d..0ad455301d 100644
--- a/tools/libxl/libxl_x86.c
+++ b/tools/libxl/libxl_x86.c
@@ -852,6 +852,18 @@ int libxl__arch_passthrough_mode_setdefault(libxl__gc *gc,
     return rc;
 }
 
+unsigned long libxl__arch_get_required_paging_memory(unsigned long maxmem_kb,
+                                                     unsigned int smp_cpus)
+{
+    /*
+     * 256 pages (1MB) per vcpu,
+     * plus 1 page per MiB of RAM for the P2M map,
+     * plus 1 page per MiB of RAM to shadow the resident processes.
+     * This is higher than the minimum that Xen would allocate if no value
+     * were given (but the Xen minimum is for safety, not performance).
+     */
+    return 4 * (256 * smp_cpus + 2 * (maxmem_kb / 1024));
+}
 
 /*
  * Local variables:
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.14


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 18:23:55 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 18:23:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431168.683816 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo7Xm-0006TT-Vn; Thu, 27 Oct 2022 18:23:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431168.683816; Thu, 27 Oct 2022 18:23:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo7Xm-0006TL-T5; Thu, 27 Oct 2022 18:23:54 +0000
Received: by outflank-mailman (input) for mailman id 431168;
 Thu, 27 Oct 2022 18:23:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo7Xl-0006T5-KQ
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 18:23:53 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo7Xl-0000jC-Jp
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 18:23:53 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo7Xl-0003l0-J2
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 18:23:53 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=x9/W4LV+Nt8djU/r4gAK9NOL1SLomb6OY7LO1Zd8Kps=; b=QiRGk8U9CtVD9dSC5ezb/98TU0
	4Iz2UbS7kx4I58HBJc8FZ8sUglkmL+hxBROvHUobsJQul0jFd8T75pGa9v6MshSLDJqa+GsvQVrvW
	NNKO1R4bjWojgiZ3e7ljFNkb43poUeYbXAtOzkqbApGaoa78foGmGVFkPSNpLLhgBiTI=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.14] xen/arm: Construct the P2M pages pool for guests
Message-Id: <E1oo7Xl-0003l0-J2@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 18:23:53 +0000

commit fd688b06a57a327dc5dbda106a104a2af5e1aa2b
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Tue Oct 11 15:39:18 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:39:18 2022 +0200

    xen/arm: Construct the P2M pages pool for guests
    
    This commit constructs the p2m pages pool for guests from the
    data structure and helper perspective.
    
    This is implemented by:
    
    - Adding a `struct paging_domain`, which contains a freelist, a
    counter and a spinlock, to `struct arch_domain` to track the free
    p2m pages and the total number of pages in the p2m pages pool.
    
    - Adding a helper `p2m_get_allocation` to get the p2m pool size.
    
    - Adding a helper `p2m_set_allocation` to set the p2m pages pool
    size. This helper should be called before allocating memory for
    a guest.
    
    - Adding a helper `p2m_teardown_allocation` to free the p2m pages
    pool. This helper should be called during xl domain destruction.
    
    This is part of CVE-2022-33747 / XSA-409.
    
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    master commit: 55914f7fc91a468649b8a3ec3f53ae1c4aca6670
    master date: 2022-10-11 14:28:39 +0200
---
 xen/arch/arm/p2m.c           | 88 ++++++++++++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/domain.h | 10 +++++
 xen/include/asm-arm/p2m.h    |  4 ++
 3 files changed, 102 insertions(+)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 62f4d31dc1..0c331a36a5 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -49,6 +49,92 @@ static uint64_t generate_vttbr(uint16_t vmid, mfn_t root_mfn)
     return (mfn_to_maddr(root_mfn) | ((uint64_t)vmid << 48));
 }
 
+/* Return the size of the pool, rounded up to the nearest MB */
+unsigned int p2m_get_allocation(struct domain *d)
+{
+    unsigned long nr_pages = ACCESS_ONCE(d->arch.paging.p2m_total_pages);
+
+    return ROUNDUP(nr_pages, 1 << (20 - PAGE_SHIFT)) >> (20 - PAGE_SHIFT);
+}
+
+/*
+ * Set the pool of pages to the required number of pages.
+ * Returns 0 for success, non-zero for failure.
+ * Call with d->arch.paging.lock held.
+ */
+int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted)
+{
+    struct page_info *pg;
+
+    ASSERT(spin_is_locked(&d->arch.paging.lock));
+
+    for ( ; ; )
+    {
+        if ( d->arch.paging.p2m_total_pages < pages )
+        {
+            /* Need to allocate more memory from domheap */
+            pg = alloc_domheap_page(NULL, 0);
+            if ( pg == NULL )
+            {
+                printk(XENLOG_ERR "Failed to allocate P2M pages.\n");
+                return -ENOMEM;
+            }
+            ACCESS_ONCE(d->arch.paging.p2m_total_pages) =
+                d->arch.paging.p2m_total_pages + 1;
+            page_list_add_tail(pg, &d->arch.paging.p2m_freelist);
+        }
+        else if ( d->arch.paging.p2m_total_pages > pages )
+        {
+            /* Need to return memory to domheap */
+            pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
+            if( pg )
+            {
+                ACCESS_ONCE(d->arch.paging.p2m_total_pages) =
+                    d->arch.paging.p2m_total_pages - 1;
+                free_domheap_page(pg);
+            }
+            else
+            {
+                printk(XENLOG_ERR
+                       "Failed to free P2M pages, P2M freelist is empty.\n");
+                return -ENOMEM;
+            }
+        }
+        else
+            break;
+
+        /* Check to see if we need to yield and try again */
+        if ( preempted && general_preempt_check() )
+        {
+            *preempted = true;
+            return -ERESTART;
+        }
+    }
+
+    return 0;
+}
+
+int p2m_teardown_allocation(struct domain *d)
+{
+    int ret = 0;
+    bool preempted = false;
+
+    spin_lock(&d->arch.paging.lock);
+    if ( d->arch.paging.p2m_total_pages != 0 )
+    {
+        ret = p2m_set_allocation(d, 0, &preempted);
+        if ( preempted )
+        {
+            spin_unlock(&d->arch.paging.lock);
+            return -ERESTART;
+        }
+        ASSERT(d->arch.paging.p2m_total_pages == 0);
+    }
+    spin_unlock(&d->arch.paging.lock);
+
+    return ret;
+}
+
 /* Unlock the flush and do a P2M TLB flush if necessary */
 void p2m_write_unlock(struct p2m_domain *p2m)
 {
@@ -1568,7 +1654,9 @@ int p2m_init(struct domain *d)
     unsigned int cpu;
 
     rwlock_init(&p2m->lock);
+    spin_lock_init(&d->arch.paging.lock);
     INIT_PAGE_LIST_HEAD(&p2m->pages);
+    INIT_PAGE_LIST_HEAD(&d->arch.paging.p2m_freelist);
 
     p2m->vmid = INVALID_VMID;
 
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 9c4db75f08..96a878d334 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -42,6 +42,14 @@ struct vtimer {
     uint64_t cval;
 };
 
+struct paging_domain {
+    spinlock_t lock;
+    /* Free P2M pages from the pre-allocated P2M pool */
+    struct page_list_head p2m_freelist;
+    /* Number of pages from the pre-allocated P2M pool */
+    unsigned long p2m_total_pages;
+};
+
 struct arch_domain
 {
 #ifdef CONFIG_ARM_64
@@ -53,6 +61,8 @@ struct arch_domain
 
     struct hvm_domain hvm;
 
+    struct paging_domain paging;
+
     struct vmmio vmmio;
 
     /* Continuable domain_relinquish_resources(). */
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index f40f82794d..b733f55d48 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -209,6 +209,10 @@ void p2m_restore_state(struct vcpu *n);
 /* Print debugging/statistial info about a domain's p2m */
 void p2m_dump_info(struct domain *d);
 
+unsigned int p2m_get_allocation(struct domain *d);
+int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted);
+int p2m_teardown_allocation(struct domain *d);
+
 static inline void p2m_write_lock(struct p2m_domain *p2m)
 {
     write_lock(&p2m->lock);
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.14


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 18:24:05 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 18:24:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431169.683821 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo7Xx-0006WI-1l; Thu, 27 Oct 2022 18:24:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431169.683821; Thu, 27 Oct 2022 18:24:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo7Xw-0006W8-Uo; Thu, 27 Oct 2022 18:24:04 +0000
Received: by outflank-mailman (input) for mailman id 431169;
 Thu, 27 Oct 2022 18:24:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo7Xv-0006Vq-Nh
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 18:24:03 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo7Xv-0000jV-Ms
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 18:24:03 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo7Xv-0003le-MB
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 18:24:03 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=f6Yg8h5WTUOkJOkyi3Jk/GRX9nnnOH++SLH8riiOuhE=; b=uZx1PXkTHLKSgWMxhRmbH6taSL
	DEag+FCKGPJIm9uE8vRVJCtRrFf0HLKYo8KtZe+NM6XwRbDLVWHRYJFl0qgunc20JyVU28M10Ps2f
	7Ao4vp0cvBrUS+WZO0EBFA+ej06hSiGIobb2Mz+V/V24XzVd05KIg71oTS5Quq+IA208=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.14] xen/arm, libxl: Implement XEN_DOMCTL_shadow_op for Arm
Message-Id: <E1oo7Xv-0003le-MB@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 18:24:03 +0000

commit 4220eac3799f46ba84316513606a33e1ea33fb4e
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Tue Oct 11 15:42:00 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:42:00 2022 +0200

    xen/arm, libxl: Implement XEN_DOMCTL_shadow_op for Arm
    
    This commit implements the `XEN_DOMCTL_shadow_op` support in Xen
    for Arm. The p2m pages pool size for xl guests is supposed to be
    determined by `XEN_DOMCTL_shadow_op`. Hence, this commit:
    
    - Introduces a function `p2m_domctl` and implements the subops
    `XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION` and
    `XEN_DOMCTL_SHADOW_OP_GET_ALLOCATION` of `XEN_DOMCTL_shadow_op`.
    
    - Adds the `XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION` support in libxl.
    
    This enables setting the shadow memory pool size when creating a
    guest from xl, and getting the shadow memory pool size from Xen.
    
    Note that the `XEN_DOMCTL_shadow_op` added in this commit is only
    a dummy op, and the functionality of setting/getting p2m memory pool
    size for xl guests will be added in following commits.
    
    This is part of CVE-2022-33747 / XSA-409.
    
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    master commit: cf2a68d2ffbc3ce95e01449d46180bddb10d24a0
    master date: 2022-10-11 14:28:42 +0200
---
 tools/libxl/libxl_arm.c | 12 ++++++++++++
 xen/arch/arm/domctl.c   | 32 ++++++++++++++++++++++++++++++++
 2 files changed, 44 insertions(+)

diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
index f4b3dc8e71..025df1bfd0 100644
--- a/tools/libxl/libxl_arm.c
+++ b/tools/libxl/libxl_arm.c
@@ -130,6 +130,18 @@ int libxl__arch_domain_save_config(libxl__gc *gc,
 int libxl__arch_domain_create(libxl__gc *gc, libxl_domain_config *d_config,
                               uint32_t domid)
 {
+    libxl_ctx *ctx = libxl__gc_owner(gc);
+    unsigned int shadow_mb = DIV_ROUNDUP(d_config->b_info.shadow_memkb, 1024);
+
+    int r = xc_shadow_control(ctx->xch, domid,
+                              XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION,
+                              &shadow_mb, 0);
+    if (r) {
+        LOGED(ERROR, domid,
+              "Failed to set %u MiB shadow allocation", shadow_mb);
+        return ERROR_FAIL;
+    }
+
     return 0;
 }
 
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 9da88b8c64..ef1299ae1c 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -45,11 +45,43 @@ static int handle_vuart_init(struct domain *d,
     return rc;
 }
 
+static long p2m_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
+                       XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
+{
+    if ( unlikely(d == current->domain) )
+    {
+        printk(XENLOG_ERR "Tried to do a p2m domctl op on itself.\n");
+        return -EINVAL;
+    }
+
+    if ( unlikely(d->is_dying) )
+    {
+        printk(XENLOG_ERR "Tried to do a p2m domctl op on dying domain %u\n",
+               d->domain_id);
+        return -EINVAL;
+    }
+
+    switch ( sc->op )
+    {
+    case XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION:
+        return 0;
+    case XEN_DOMCTL_SHADOW_OP_GET_ALLOCATION:
+        return 0;
+    default:
+    {
+        printk(XENLOG_ERR "Bad p2m domctl op %u\n", sc->op);
+        return -EINVAL;
+    }
+    }
+}
+
 long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
                     XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     switch ( domctl->cmd )
     {
+    case XEN_DOMCTL_shadow_op:
+        return p2m_domctl(d, &domctl->u.shadow_op, u_domctl);
     case XEN_DOMCTL_cacheflush:
     {
         gfn_t s = _gfn(domctl->u.cacheflush.start_pfn);
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.14


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 18:24:15 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 18:24:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431170.683826 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo7Y7-0006Z5-4N; Thu, 27 Oct 2022 18:24:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431170.683826; Thu, 27 Oct 2022 18:24:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo7Y7-0006Yw-09; Thu, 27 Oct 2022 18:24:15 +0000
Received: by outflank-mailman (input) for mailman id 431170;
 Thu, 27 Oct 2022 18:24:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo7Y5-0006Yl-Qt
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 18:24:13 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo7Y5-0000jf-Q5
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 18:24:13 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo7Y5-0003mI-PP
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 18:24:13 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=EF+RLGcuuJiuTzsEaD5m/J/lBSdRFbWETBlvTX8V7qI=; b=1PkiT6+nHAsf0EvBVTy9GPWuxp
	k7u5wrAY/TX9F1LGEFtJAmamqFKEBYT2c1Gks2iIagNVEypwuosn6QT8fZkyKfJSLNM+dnymSLXbp
	zsag4dg3lSc4XVylgTBpPVfrdV9ZFwfpGttgiCHoPktHrX4TPM5h+NCGAKI6a9/1LBkw=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.14] xen/arm: Allocate and free P2M pages from the P2M pool
Message-Id: <E1oo7Y5-0003mI-PP@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 18:24:13 +0000

commit 7d64fb52a57109147dd4180e3a3ba4b5e735a117
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Tue Oct 11 15:42:19 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:42:19 2022 +0200

    xen/arm: Allocate and free P2M pages from the P2M pool
    
    This commit sets up/tears down the p2m pages pool for non-privileged
    Arm guests by calling `p2m_set_allocation` and `p2m_teardown_allocation`.
    
    - For dom0, P2M pages should come directly from the heap instead of
    the p2m pool, so that the kernel may take advantage of the extended
    regions.
    
    - For xl guests, the setting of the p2m pool is called in
    `XEN_DOMCTL_shadow_op` and the p2m pool is destroyed in
    `domain_relinquish_resources`. Note that domctl->u.shadow_op.mb is
    updated with the new size when setting the p2m pool.
    
    - For dom0less domUs, the p2m pool is set up before allocating
    memory during domain creation. Users can specify the p2m pool size
    via the `xen,domain-p2m-mem-mb` dts property.
    
    To actually allocate/free pages from the p2m pool, this commit adds
    two helper functions namely `p2m_alloc_page` and `p2m_free_page` to
    `struct p2m_domain`. By replacing the `alloc_domheap_page` and
    `free_domheap_page` with these two helper functions, p2m pages can
    be added/removed from the list of p2m pool rather than from the heap.
    
    Since the page returned by `p2m_alloc_page` is cleaned, take the
    opportunity to remove the redundant `clean_page` in `p2m_create_table`.
    
    This is part of CVE-2022-33747 / XSA-409.
    
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    master commit: cbea5a1149ca7fd4b7cdbfa3ec2e4f109b601ff7
    master date: 2022-10-11 14:28:44 +0200
---
 docs/misc/arm/device-tree/booting.txt |  8 +++++
 xen/arch/arm/domain.c                 |  6 ++++
 xen/arch/arm/domain_build.c           | 29 ++++++++++++++++++
 xen/arch/arm/domctl.c                 | 23 +++++++++++++-
 xen/arch/arm/p2m.c                    | 57 ++++++++++++++++++++++++++++++++---
 5 files changed, 118 insertions(+), 5 deletions(-)

diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
index 5243bc7fd3..470c9491a7 100644
--- a/docs/misc/arm/device-tree/booting.txt
+++ b/docs/misc/arm/device-tree/booting.txt
@@ -164,6 +164,14 @@ with the following properties:
     Both #address-cells and #size-cells need to be specified because
     both sub-nodes (described shortly) have reg properties.
 
+- xen,domain-p2m-mem-mb
+
+    Optional. A 32-bit integer specifying the amount of megabytes of RAM
+    used for the domain P2M pool. This is in-sync with the shadow_memory
+    option in xl.cfg. Leaving this field empty in device tree will lead to
+    the default size of domain P2M pool, i.e. 1MB per guest vCPU plus 4KB
+    per MB of guest RAM plus 512KB for guest extended regions.
+
 Under the "xen,domain" compatible node, one or more sub-nodes are present
 for the DomU kernel and ramdisk.
 
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index caa625bd16..aae615f7d6 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -980,6 +980,7 @@ enum {
     PROG_page,
     PROG_mapping,
     PROG_p2m,
+    PROG_p2m_pool,
     PROG_done,
 };
 
@@ -1035,6 +1036,11 @@ int domain_relinquish_resources(struct domain *d)
         if ( ret )
             return ret;
 
+    PROGRESS(p2m_pool):
+        ret = p2m_teardown_allocation(d);
+        if( ret )
+            return ret;
+
     PROGRESS(done):
         break;
 
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index f49dbf1ca1..3c05fa5ac7 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2333,6 +2333,21 @@ static void __init find_gnttab_region(struct domain *d,
            kinfo->gnttab_start, kinfo->gnttab_start + kinfo->gnttab_size);
 }
 
+static unsigned long __init domain_p2m_pages(unsigned long maxmem_kb,
+                                             unsigned int smp_cpus)
+{
+    /*
+     * Keep in sync with libxl__get_required_paging_memory().
+     * 256 pages (1MB) per vcpu, plus 1 page per MiB of RAM for the P2M map,
+     * plus 128 pages to cover extended regions.
+     */
+    unsigned long memkb = 4 * (256 * smp_cpus + (maxmem_kb / 1024) + 128);
+
+    BUILD_BUG_ON(PAGE_SIZE != SZ_4K);
+
+    return DIV_ROUND_UP(memkb, 1024) << (20 - PAGE_SHIFT);
+}
+
 static int __init construct_domain(struct domain *d, struct kernel_info *kinfo)
 {
     unsigned int i;
@@ -2424,6 +2439,8 @@ static int __init construct_domU(struct domain *d,
     struct kernel_info kinfo = {};
     int rc;
     u64 mem;
+    u32 p2m_mem_mb;
+    unsigned long p2m_pages;
 
     rc = dt_property_read_u64(node, "memory", &mem);
     if ( !rc )
@@ -2433,6 +2450,18 @@ static int __init construct_domU(struct domain *d,
     }
     kinfo.unassigned_mem = (paddr_t)mem * SZ_1K;
 
+    rc = dt_property_read_u32(node, "xen,domain-p2m-mem-mb", &p2m_mem_mb);
+    /* If xen,domain-p2m-mem-mb is not specified, use the default value. */
+    p2m_pages = rc ?
+                p2m_mem_mb << (20 - PAGE_SHIFT) :
+                domain_p2m_pages(mem, d->max_vcpus);
+
+    spin_lock(&d->arch.paging.lock);
+    rc = p2m_set_allocation(d, p2m_pages, NULL);
+    spin_unlock(&d->arch.paging.lock);
+    if ( rc != 0 )
+        return rc;
+
     printk("*** LOADING DOMU cpus=%u memory=%"PRIx64"KB ***\n", d->max_vcpus, mem);
 
     kinfo.vpl011 = dt_property_read_bool(node, "vpl011");
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index ef1299ae1c..dab3da3a23 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -48,6 +48,9 @@ static int handle_vuart_init(struct domain *d,
 static long p2m_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
                        XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
+    long rc;
+    bool preempted = false;
+
     if ( unlikely(d == current->domain) )
     {
         printk(XENLOG_ERR "Tried to do a p2m domctl op on itself.\n");
@@ -64,9 +67,27 @@ static long p2m_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
     switch ( sc->op )
     {
     case XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION:
-        return 0;
+    {
+        /* Allow and handle preemption */
+        spin_lock(&d->arch.paging.lock);
+        rc = p2m_set_allocation(d, sc->mb << (20 - PAGE_SHIFT), &preempted);
+        spin_unlock(&d->arch.paging.lock);
+
+        if ( preempted )
+            /* Not finished. Set up to re-run the call. */
+            rc = hypercall_create_continuation(__HYPERVISOR_domctl, "h",
+                                               u_domctl);
+        else
+            /* Finished. Return the new allocation. */
+            sc->mb = p2m_get_allocation(d);
+
+        return rc;
+    }
     case XEN_DOMCTL_SHADOW_OP_GET_ALLOCATION:
+    {
+        sc->mb = p2m_get_allocation(d);
         return 0;
+    }
     default:
     {
         printk(XENLOG_ERR "Bad p2m domctl op %u\n", sc->op);
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 0c331a36a5..13b06c0fe4 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -49,6 +49,54 @@ static uint64_t generate_vttbr(uint16_t vmid, mfn_t root_mfn)
     return (mfn_to_maddr(root_mfn) | ((uint64_t)vmid << 48));
 }
 
+static struct page_info *p2m_alloc_page(struct domain *d)
+{
+    struct page_info *pg;
+
+    spin_lock(&d->arch.paging.lock);
+    /*
+     * For hardware domain, there should be no limit in the number of pages that
+     * can be allocated, so that the kernel may take advantage of the extended
+     * regions. Hence, allocate p2m pages for hardware domains from heap.
+     */
+    if ( is_hardware_domain(d) )
+    {
+        pg = alloc_domheap_page(NULL, 0);
+        if ( pg == NULL )
+        {
+            printk(XENLOG_G_ERR "Failed to allocate P2M pages for hwdom.\n");
+            spin_unlock(&d->arch.paging.lock);
+            return NULL;
+        }
+    }
+    else
+    {
+        pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
+        if ( unlikely(!pg) )
+        {
+            spin_unlock(&d->arch.paging.lock);
+            return NULL;
+        }
+        d->arch.paging.p2m_total_pages--;
+    }
+    spin_unlock(&d->arch.paging.lock);
+
+    return pg;
+}
+
+static void p2m_free_page(struct domain *d, struct page_info *pg)
+{
+    spin_lock(&d->arch.paging.lock);
+    if ( is_hardware_domain(d) )
+        free_domheap_page(pg);
+    else
+    {
+        d->arch.paging.p2m_total_pages++;
+        page_list_add_tail(pg, &d->arch.paging.p2m_freelist);
+    }
+    spin_unlock(&d->arch.paging.lock);
+}
+
 /* Return the size of the pool, rounded up to the nearest MB */
 unsigned int p2m_get_allocation(struct domain *d)
 {
@@ -750,7 +798,7 @@ static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry)
 
     ASSERT(!p2m_is_valid(*entry));
 
-    page = alloc_domheap_page(NULL, 0);
+    page = p2m_alloc_page(p2m->domain);
     if ( page == NULL )
         return -ENOMEM;
 
@@ -870,7 +918,7 @@ static void p2m_free_entry(struct p2m_domain *p2m,
     pg = mfn_to_page(mfn);
 
     page_list_del(pg, &p2m->pages);
-    free_domheap_page(pg);
+    p2m_free_page(p2m->domain, pg);
 }
 
 static bool p2m_split_superpage(struct p2m_domain *p2m, lpae_t *entry,
@@ -894,7 +942,7 @@ static bool p2m_split_superpage(struct p2m_domain *p2m, lpae_t *entry,
     ASSERT(level < target);
     ASSERT(p2m_is_superpage(*entry, level));
 
-    page = alloc_domheap_page(NULL, 0);
+    page = p2m_alloc_page(p2m->domain);
     if ( !page )
         return false;
 
@@ -1610,7 +1658,7 @@ int p2m_teardown(struct domain *d)
 
     while ( (pg = page_list_remove_head(&p2m->pages)) )
     {
-        free_domheap_page(pg);
+        p2m_free_page(p2m->domain, pg);
         count++;
         /* Arbitrarily preempt every 512 iterations */
         if ( !(count % 512) && hypercall_preempt_check() )
@@ -1634,6 +1682,7 @@ void p2m_final_teardown(struct domain *d)
         return;
 
     ASSERT(page_list_empty(&p2m->pages));
+    ASSERT(page_list_empty(&d->arch.paging.p2m_freelist));
 
     if ( p2m->root )
         free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.14
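
The patch above sizes a domU's P2M pool from the optional "xen,domain-p2m-mem-mb" device-tree property, converting MiB to pages with a shift, and the domctl hunk resizes the pool preemptibly (re-running via a hypercall continuation every 512 steps). Below is a minimal, self-contained sketch of those two mechanics. It is a simplified model, not Xen code: `mb_to_pages()` and `pool_set_allocation()` are invented names, and a 4 KiB `PAGE_SHIFT` is assumed.

```c
#include <assert.h>
#include <stdbool.h>

#define PAGE_SHIFT 12  /* assume 4 KiB pages */

/* MiB -> pages, as the hunk does with "p2m_mem_mb << (20 - PAGE_SHIFT)". */
static unsigned long mb_to_pages(unsigned long mb)
{
    return mb << (20 - PAGE_SHIFT);
}

/*
 * Toy model of a preemptible pool resize: move "total" towards "target"
 * one page at a time, offering to preempt every 512 iterations.  This
 * mirrors the shape of p2m_set_allocation(), not its real body.
 */
static int pool_set_allocation(unsigned long *total, unsigned long target,
                               bool *preempted)
{
    unsigned long count = 0;

    while ( *total != target )
    {
        if ( *total < target )
            (*total)++;             /* would allocate a page into the pool */
        else
            (*total)--;             /* would free a page from the pool */

        /* Arbitrarily preempt every 512 iterations, like the hypervisor. */
        if ( preempted && !(++count & 511) )
        {
            *preempted = true;
            return -1;              /* caller sets up a continuation */
        }
    }
    return 0;
}
```

When the caller passes a NULL `preempted` pointer the loop runs to completion, matching how the domU construction path calls the allocation non-preemptively while the domctl path allows restarts.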


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 18:24:26 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.14] gnttab: correct locking on transitive grant copy error path
Message-Id: <E1oo7YF-0003mn-SU@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 18:24:23 +0000

commit 6e5608d1c50e0f91ed3226489d9591c70fa37c30
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 15:42:48 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:42:48 2022 +0200

    gnttab: correct locking on transitive grant copy error path
    
    While the comment next to the lock dropping in preparation for
    recursively calling acquire_grant_for_copy() mistakenly talks about the
    rd == td case (excluded a few lines further up), the same concerns apply
    to the calling of release_grant_for_copy() on a subsequent error path.
    
    This is CVE-2022-33748 / XSA-411.
    
    Fixes: ad48fb963dbf ("gnttab: fix transitive grant handling")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    master commit: 6e3aab858eef614a21a782a3b73acc88e74690ea
    master date: 2022-10-11 14:29:30 +0200
---
 xen/common/grant_table.c | 21 +++++++++++++++++----
 1 file changed, 17 insertions(+), 4 deletions(-)

diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 34498d4652..576b1d34dc 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -2617,9 +2617,8 @@ acquire_grant_for_copy(
                      trans_domid);
 
         /*
-         * acquire_grant_for_copy() could take the lock on the
-         * remote table (if rd == td), so we have to drop the lock
-         * here and reacquire.
+         * acquire_grant_for_copy() will take the lock on the remote table,
+         * so we have to drop the lock here and reacquire.
          */
         active_entry_release(act);
         grant_read_unlock(rgt);
@@ -2656,11 +2655,25 @@ acquire_grant_for_copy(
                           act->trans_gref != trans_gref ||
                           !act->is_sub_page)) )
         {
+            /*
+             * Like above for acquire_grant_for_copy() we need to drop and then
+             * re-acquire the locks here to prevent lock order inversion issues.
+             * Unlike for acquire_grant_for_copy() we don't need to re-check
+             * anything, as release_grant_for_copy() doesn't depend on the grant
+             * table entry: It only updates internal state and the status flags.
+             */
+            active_entry_release(act);
+            grant_read_unlock(rgt);
+
             release_grant_for_copy(td, trans_gref, readonly);
-            fixup_status_for_copy_pin(rd, act, status);
             rcu_unlock_domain(td);
+
+            grant_read_lock(rgt);
+            act = active_entry_acquire(rgt, gref);
+            fixup_status_for_copy_pin(rd, act, status);
             active_entry_release(act);
             grant_read_unlock(rgt);
+
             put_page(*page);
             *page = NULL;
             return ERESTART;
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.14
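
The fix above rearranges the error path so the local grant-table lock is dropped before release_grant_for_copy() takes another lock, then re-acquired for the status fixup. Here is a toy model of that drop/reacquire discipline, with flags standing in for locks so the ordering rule can be asserted directly; the names loosely mirror the patch and are illustrative, not Xen's.

```c
#include <assert.h>

/* A "lock" is just a flag; the invariant under test is that the local
 * grant-table lock is never held while taking the remote one. */
static int table_locked, remote_locked, status_fixed;

static void take_remote(void)
{
    assert(!table_locked);   /* lock-order rule: local lock must be dropped */
    remote_locked = 1;
    /* ... release_grant_for_copy() work would happen here ... */
    remote_locked = 0;
}

static int error_path(void)
{
    table_locked = 1;        /* grant_read_lock() + active entry held */

    /* The old code called into the remote side right here, still
     * holding table_locked; the fix drops the lock first. */
    table_locked = 0;

    take_remote();

    table_locked = 1;        /* re-acquire ... */
    status_fixed = 1;        /* ... and only then fix up the status bits */
    table_locked = 0;

    return 0;                /* the real code returns ERESTART here */
}
```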


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 18:24:35 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.14] libxl/Arm: correct xc_shadow_control() invocation to fix build
Message-Id: <E1oo7YP-0003nG-VO@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 18:24:33 +0000

commit 016de62747b26ead5a5c763b640fe8e205cd182b
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Wed Oct 12 17:36:03 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Wed Oct 12 17:36:03 2022 +0200

    libxl/Arm: correct xc_shadow_control() invocation to fix build
    
    The backport didn't adapt to the earlier function prototype taking more
    (unused here) arguments.
    
    Fixes: c5215044578e ("xen/arm, libxl: Implement XEN_DOMCTL_shadow_op for Arm")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Henry Wang <Henry.Wang@arm.com>
    Acked-by: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/libxl/libxl_arm.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
index 025df1bfd0..79cfb9cd29 100644
--- a/tools/libxl/libxl_arm.c
+++ b/tools/libxl/libxl_arm.c
@@ -131,14 +131,14 @@ int libxl__arch_domain_create(libxl__gc *gc, libxl_domain_config *d_config,
                               uint32_t domid)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
-    unsigned int shadow_mb = DIV_ROUNDUP(d_config->b_info.shadow_memkb, 1024);
+    unsigned long shadow_mb = DIV_ROUNDUP(d_config->b_info.shadow_memkb, 1024);
 
     int r = xc_shadow_control(ctx->xch, domid,
                               XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION,
-                              &shadow_mb, 0);
+                              NULL, 0, &shadow_mb, 0, NULL);
     if (r) {
         LOGED(ERROR, domid,
-              "Failed to set %u MiB shadow allocation", shadow_mb);
+              "Failed to set %lu MiB shadow allocation", shadow_mb);
         return ERROR_FAIL;
     }
 
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.14


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 18:24:45 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.14] arm/p2m: Rework p2m_init()
Message-Id: <E1oo7Ya-0003nl-2K@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 18:24:44 +0000

commit f25c377285d155d7d88cb0e4efad58f7fd8c9d4b
Author:     Andrew Cooper <andrew.cooper3@citrix.com>
AuthorDate: Tue Oct 25 09:21:11 2022 +0000
Commit:     Julien Grall <jgrall@amazon.com>
CommitDate: Tue Oct 25 20:59:32 2022 +0100

    arm/p2m: Rework p2m_init()
    
    p2m_init() is mostly trivial initialisation, but has two fallible
    operations, one on either side of the backpointer assignment that tells
    teardown to take action.
    
    p2m_free_vmid() is idempotent with a failed p2m_alloc_vmid(), so rearrange
    p2m_init() to perform all trivial setup, then set the backpointer, then
    perform all fallible setup.
    
    This will simplify a future bugfix which needs to add a third fallible
    operation.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
    (cherry picked from commit: 3783e583319fa1ce75e414d851f0fde191a14753)
---
 xen/arch/arm/p2m.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 13b06c0fe4..2642d2748c 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1699,7 +1699,7 @@ void p2m_final_teardown(struct domain *d)
 int p2m_init(struct domain *d)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    int rc = 0;
+    int rc;
     unsigned int cpu;
 
     rwlock_init(&p2m->lock);
@@ -1708,11 +1708,6 @@ int p2m_init(struct domain *d)
     INIT_PAGE_LIST_HEAD(&d->arch.paging.p2m_freelist);
 
     p2m->vmid = INVALID_VMID;
-
-    rc = p2m_alloc_vmid(d);
-    if ( rc != 0 )
-        return rc;
-
     p2m->max_mapped_gfn = _gfn(0);
     p2m->lowest_mapped_gfn = _gfn(ULONG_MAX);
 
@@ -1728,8 +1723,6 @@ int p2m_init(struct domain *d)
     p2m->clean_pte = is_iommu_enabled(d) &&
         !iommu_has_feature(d, IOMMU_FEAT_COHERENT_WALK);
 
-    rc = p2m_alloc_table(d);
-
     /*
      * Make sure that the type chosen to is able to store the an vCPU ID
      * between 0 and the maximum of virtual CPUS supported as long as
@@ -1742,13 +1735,20 @@ int p2m_init(struct domain *d)
        p2m->last_vcpu_ran[cpu] = INVALID_VCPU_ID;
 
     /*
-     * Besides getting a domain when we only have the p2m in hand,
-     * the back pointer to domain is also used in p2m_teardown()
-     * as an end-of-initialization indicator.
+     * "Trivial" initialisation is now complete.  Set the backpointer so
+     * p2m_teardown() and friends know to do something.
      */
     p2m->domain = d;
 
-    return rc;
+    rc = p2m_alloc_vmid(d);
+    if ( rc )
+        return rc;
+
+    rc = p2m_alloc_table(d);
+    if ( rc )
+        return rc;
+
+    return 0;
 }
 
 /*
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.14
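
The commit above moves all fallible work after the backpointer assignment, so that teardown, keyed off that pointer, can clean up a partially-initialised structure idempotently. A minimal model of the pattern follows; the struct layout and the function names (`alloc_vmid()`, `init()`, `teardown()`) are invented for illustration, not Xen's.

```c
#include <assert.h>

struct domain { int id; };

struct p2m {
    struct domain *domain;  /* backpointer doubles as "initialised" flag */
    int vmid;               /* -1 until allocated */
    int have_table;
};

/* A fallible step; "fail" lets the test force the error path. */
static int alloc_vmid(struct p2m *p2m, int fail)
{
    if ( fail )
        return -1;
    p2m->vmid = 1;
    return 0;
}

/* Safe to call on a partially-initialised p2m: each step is guarded. */
static void teardown(struct p2m *p2m)
{
    if ( !p2m->domain )     /* init never got far enough: nothing to do */
        return;
    if ( p2m->vmid >= 0 )   /* idempotent with a failed alloc_vmid() */
        p2m->vmid = -1;
    p2m->have_table = 0;
}

static int init(struct p2m *p2m, struct domain *d, int fail_vmid)
{
    /* Trivial initialisation first ... */
    p2m->vmid = -1;
    p2m->have_table = 0;

    /* ... then the backpointer, so teardown() knows to act ... */
    p2m->domain = d;

    /* ... then the fallible steps. */
    if ( alloc_vmid(p2m, fail_vmid) )
        return -1;
    p2m->have_table = 1;
    return 0;
}
```

With this ordering, a later patch can insert a third fallible step at the end of `init()` without changing the cleanup logic at all.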


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 18:24:55 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.14] xen/arm: p2m: Populate pages for GICv2 mapping in p2m_init()
Message-Id: <E1oo7Yk-0003oA-5R@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 18:24:54 +0000

commit 96220aec3e72b9d71600d78958b60e77db753b94
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Tue Oct 25 09:21:12 2022 +0000
Commit:     Julien Grall <jgrall@amazon.com>
CommitDate: Tue Oct 25 20:59:33 2022 +0100

    xen/arm: p2m: Populate pages for GICv2 mapping in p2m_init()
    
    Hardware using GICv2 needs a P2M mapping of the 8KB GICv2 area created
    when the domain is created. The worst case requires 6 P2M pages, as the
    two pages are consecutive but not necessarily covered by the same L3
    page table; to keep a buffer on top of that, populate 16 pages as the
    default value of the P2M page pool in p2m_init() at the domain creation
    stage, satisfying the GICv2 requirement. For GICv3 the above-mentioned
    P2M mapping is not necessary, but since the allocated 16 pages would
    not be lost, populate them unconditionally.
    
    With the default 16 P2M pages populated, domain creation can now fail
    with P2M pages already in use. To free the P2M properly in that case,
    first support optional preemption in p2m_teardown(), then call
    p2m_teardown() and p2m_set_allocation(d, 0, NULL) non-preemptively in
    p2m_final_teardown(). As a non-preemptive p2m_teardown() should only
    return 0, confirm that with a BUG_ON().
    
    Since p2m_final_teardown() is called either after
    domain_relinquish_resources(), where relinquish_p2m_mapping() has
    already been called, or from the failure path of
    domain_create()/arch_domain_create(), where mappings that require
    p2m_put_l3_page() should never be created, relinquish_p2m_mapping() is
    not added to p2m_final_teardown(); in-code comments are added to
    document this.
    
    Fixes: cbea5a1149ca ("xen/arm: Allocate and free P2M pages from the P2M pool")
    Suggested-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
    (cherry picked from commit: c7cff1188802646eaa38e918e5738da0e84949be)
---
 xen/arch/arm/domain.c     |  2 +-
 xen/arch/arm/p2m.c        | 34 ++++++++++++++++++++++++++++++++--
 xen/include/asm-arm/p2m.h | 14 ++++++++++----
 3 files changed, 43 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index aae615f7d6..0fa1c0cb80 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -1032,7 +1032,7 @@ int domain_relinquish_resources(struct domain *d)
             return ret;
 
     PROGRESS(p2m):
-        ret = p2m_teardown(d);
+        ret = p2m_teardown(d, true);
         if ( ret )
             return ret;
 
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 2642d2748c..3eb6f16b30 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1630,7 +1630,7 @@ static void p2m_free_vmid(struct domain *d)
     spin_unlock(&vmid_alloc_lock);
 }
 
-int p2m_teardown(struct domain *d)
+int p2m_teardown(struct domain *d, bool allow_preemption)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     unsigned long count = 0;
@@ -1638,6 +1638,9 @@ int p2m_teardown(struct domain *d)
     unsigned int i;
     int rc = 0;
 
+    if ( page_list_empty(&p2m->pages) )
+        return 0;
+
     p2m_write_lock(p2m);
 
     /*
@@ -1661,7 +1664,7 @@ int p2m_teardown(struct domain *d)
         p2m_free_page(p2m->domain, pg);
         count++;
         /* Arbitrarily preempt every 512 iterations */
-        if ( !(count % 512) && hypercall_preempt_check() )
+        if ( allow_preemption && !(count % 512) && hypercall_preempt_check() )
         {
             rc = -ERESTART;
             break;
@@ -1681,7 +1684,20 @@ void p2m_final_teardown(struct domain *d)
     if ( !p2m->domain )
         return;
 
+    /*
+     * No need to call relinquish_p2m_mapping() here because
+     * p2m_final_teardown() is called either after domain_relinquish_resources()
+     * where relinquish_p2m_mapping() has been called, or from failure path of
+     * domain_create()/arch_domain_create() where mappings that require
+     * p2m_put_l3_page() should never be created. For the latter case, also see
+     * comment on top of the p2m_set_entry() for more info.
+     */
+
+    BUG_ON(p2m_teardown(d, false));
     ASSERT(page_list_empty(&p2m->pages));
+
+    while ( p2m_teardown_allocation(d) == -ERESTART )
+        continue; /* No preemption support here */
     ASSERT(page_list_empty(&d->arch.paging.p2m_freelist));
 
     if ( p2m->root )
@@ -1748,6 +1764,20 @@ int p2m_init(struct domain *d)
     if ( rc )
         return rc;
 
+    /*
+     * Hardware using GICv2 needs to create a P2M mapping of 8KB GICv2 area
+     * when the domain is created. Considering the worst case for page
+     * tables and keep a buffer, populate 16 pages to the P2M pages pool here.
+     * For GICv3, the above-mentioned P2M mapping is not necessary, but since
+     * the allocated 16 pages here would not be lost, hence populate these
+     * pages unconditionally.
+     */
+    spin_lock(&d->arch.paging.lock);
+    rc = p2m_set_allocation(d, 16, NULL);
+    spin_unlock(&d->arch.paging.lock);
+    if ( rc )
+        return rc;
+
     return 0;
 }
 
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index b733f55d48..ac4edb95ce 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -185,14 +185,18 @@ int p2m_init(struct domain *d);
 
 /*
  * The P2M resources are freed in two parts:
- *  - p2m_teardown() will be called when relinquish the resources. It
- *    will free large resources (e.g. intermediate page-tables) that
- *    requires preemption.
+ *  - p2m_teardown() will be called preemptively when relinquish the
+ *    resources, in which case it will free large resources (e.g. intermediate
+ *    page-tables) that requires preemption.
  *  - p2m_final_teardown() will be called when domain struct is been
  *    freed. This *cannot* be preempted and therefore one small
  *    resources should be freed here.
+ *  Note that p2m_final_teardown() will also call p2m_teardown(), to properly
+ *  free the P2M when failures happen in the domain creation with P2M pages
+ *  already in use. In this case p2m_teardown() is called non-preemptively and
+ *  p2m_teardown() will always return 0.
  */
-int p2m_teardown(struct domain *d);
+int p2m_teardown(struct domain *d, bool allow_preemption);
 void p2m_final_teardown(struct domain *d);
 
 /*
@@ -257,6 +261,8 @@ mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
 /*
  * Direct set a p2m entry: only for use by the P2M code.
  * The P2M write lock should be taken.
+ * TODO: Add a check in __p2m_set_entry() to avoid creating a mapping in
+ * arch_domain_create() that requires p2m_put_l3_page() to be called.
  */
 int p2m_set_entry(struct p2m_domain *p2m,
                   gfn_t sgfn,
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.14


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 20:55:07 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.16] update Xen version to 4.16.3-pre
Message-Id: <E1oo9u1-0001ys-Q2@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 20:55:01 +0000

commit 4aa32912ebeda8cb94d1c3941e7f1f0a2d4f921b
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 14:49:41 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:49:41 2022 +0200

    update Xen version to 4.16.3-pre
---
 xen/Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/Makefile b/xen/Makefile
index 76d0a3ff25..8a403ee896 100644
--- a/xen/Makefile
+++ b/xen/Makefile
@@ -2,7 +2,7 @@
 # All other places this is stored (eg. compile.h) should be autogenerated.
 export XEN_VERSION       = 4
 export XEN_SUBVERSION    = 16
-export XEN_EXTRAVERSION ?= .2$(XEN_VENDORVERSION)
+export XEN_EXTRAVERSION ?= .3-pre$(XEN_VENDORVERSION)
 export XEN_FULLVERSION   = $(XEN_VERSION).$(XEN_SUBVERSION)$(XEN_EXTRAVERSION)
 -include xen-version
 
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 20:55:14 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 20:55:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431210.683937 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9uE-0002gk-0D; Thu, 27 Oct 2022 20:55:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431210.683937; Thu, 27 Oct 2022 20:55:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9uD-0002ga-TM; Thu, 27 Oct 2022 20:55:13 +0000
Received: by outflank-mailman (input) for mailman id 431210;
 Thu, 27 Oct 2022 20:55:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9uB-0002er-Vo
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:55:11 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9uB-0003I7-V1
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:55:11 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9uB-0001zV-UC
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:55:11 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=SHYTeWMuZpWBnzWXtVjP2SrCAu9SkGSz4xGBSWj2nAo=; b=jVnpz8NdsDJZvzyajSmm3CyAZG
	UZQQhDbQPq1hYlXb7mrCUDYNMon5g3yH4FLUrIXpvYMZS8qu65CBDBuxn/XkbcKezs3xEn+YFGAEf
	EI9RiwAZmgRsnjBVQeokZavu83dPrh/xnAmJRSW1HQ0rrvZPIiWREAHnpf5Cas75K3pU=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.16] xen/arm: p2m: Prevent adding mapping when domain is dying
Message-Id: <E1oo9uB-0001zV-UC@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 20:55:11 +0000

commit 8d9531a3421dad2b0012e09e6f41d5274e162064
Author:     Julien Grall <jgrall@amazon.com>
AuthorDate: Tue Oct 11 14:52:13 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:52:13 2022 +0200

    xen/arm: p2m: Prevent adding mapping when domain is dying
    
    During the domain destroy process, the domain will still be accessible
    until it is fully destroyed. The same is true of the P2M, because we
    don't bail out early when is_dying is non-zero. If a domain has
    permission to modify another domain's P2M (i.e. dom0, or a
    stubdomain), then foreign mappings can be added past
    relinquish_p2m_mapping().

    Therefore, we need to prevent mappings from being added while the
    domain is dying. This commit does so by adding a d->is_dying check to
    p2m_set_entry(). It also strengthens the check in
    relinquish_p2m_mapping() to ensure that no mappings can be added to
    the P2M after the P2M lock is released.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Tested-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    master commit: 3ebe773293e3b945460a3d6f54f3b91915397bab
    master date: 2022-10-11 14:20:18 +0200
---
 xen/arch/arm/p2m.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 3349b464a3..1affdafadb 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1093,6 +1093,15 @@ int p2m_set_entry(struct p2m_domain *p2m,
 {
     int rc = 0;
 
+    /*
+     * Any references taken by P2M mappings (e.g. foreign mappings) will
+     * be dropped in relinquish_p2m_mapping(). As the P2M will still be
+     * accessible afterwards, we need to prevent mappings from being
+     * added while the domain is dying.
+     */
+     */
+    if ( unlikely(p2m->domain->is_dying) )
+        return -ENOMEM;
+
     while ( nr )
     {
         unsigned long mask;
@@ -1610,6 +1619,8 @@ int relinquish_p2m_mapping(struct domain *d)
     unsigned int order;
     gfn_t start, end;
 
+    BUG_ON(!d->is_dying);
+    /* No mappings can be added in the P2M after the P2M lock is released. */
     p2m_write_lock(p2m);
 
     start = p2m->lowest_mapped_gfn;
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 20:55:24 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 20:55:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431213.683942 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9uO-0002sg-2a; Thu, 27 Oct 2022 20:55:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431213.683942; Thu, 27 Oct 2022 20:55:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9uN-0002sW-Ut; Thu, 27 Oct 2022 20:55:23 +0000
Received: by outflank-mailman (input) for mailman id 431213;
 Thu, 27 Oct 2022 20:55:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9uM-0002rd-2i
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:55:22 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9uM-0003If-22
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:55:22 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9uM-00020C-1A
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:55:22 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=lTN9LEEbGpONGuQQrweGeN9U+GFCCIye6vWpBkshWO4=; b=Dv+Cc9dGb1qUyPwES9WZSUmfW8
	4/o3JB8NQtvqZoRk4o5sZDtdu1WFwBw1dOxMs6rKn2ZYnb4K1z8wnTtCmzKWo8M8SffYrNt5v2qii
	esy8VP5PeO6nH1FRju09Op+1oIKSXDEYA7zfZD9Pqa729tUFhJwH9c/TY01GFd86pnUY=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.16] xen/arm: p2m: Handle preemption when freeing intermediate page tables
Message-Id: <E1oo9uM-00020C-1A@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 20:55:22 +0000

commit 937fdbad5180440888f1fcee46299103327efa90
Author:     Julien Grall <jgrall@amazon.com>
AuthorDate: Tue Oct 11 14:52:27 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:52:27 2022 +0200

    xen/arm: p2m: Handle preemption when freeing intermediate page tables
    
    At the moment the P2M page tables are freed, without any preemption,
    when the domain structure is freed. As the P2M is quite large,
    iterating through it may take longer than is reasonable without
    intermediate preemption (to run softirqs and perhaps the scheduler).

    Split p2m_teardown() into two parts: one preemptible, called when
    relinquishing the resources, and one non-preemptible, called when
    freeing the domain structure.

    As we now free the P2M pages early, we also need to prevent further
    allocation if someone calls p2m_set_entry() past p2m_teardown() (I
    wasn't able to prove this can never happen). This is done by the
    d->is_dying check added to p2m_set_entry() in the previous patch.

    Similarly, we want to make sure that no one can access the freed
    pages. Therefore the root is cleared before the pages are freed.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Tested-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    master commit: 3202084566bba0ef0c45caf8c24302f83d92f9c8
    master date: 2022-10-11 14:20:56 +0200
---
 xen/arch/arm/domain.c     | 10 ++++++++--
 xen/arch/arm/p2m.c        | 47 ++++++++++++++++++++++++++++++++++++++++++++---
 xen/include/asm-arm/p2m.h | 13 +++++++++++--
 3 files changed, 63 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 96e1b23550..2694c39127 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -789,10 +789,10 @@ fail:
 void arch_domain_destroy(struct domain *d)
 {
     /* IOMMU page table is shared with P2M, always call
-     * iommu_domain_destroy() before p2m_teardown().
+     * iommu_domain_destroy() before p2m_final_teardown().
      */
     iommu_domain_destroy(d);
-    p2m_teardown(d);
+    p2m_final_teardown(d);
     domain_vgic_free(d);
     domain_vuart_free(d);
     free_xenheap_page(d->shared_info);
@@ -996,6 +996,7 @@ enum {
     PROG_xen,
     PROG_page,
     PROG_mapping,
+    PROG_p2m,
     PROG_done,
 };
 
@@ -1056,6 +1057,11 @@ int domain_relinquish_resources(struct domain *d)
         if ( ret )
             return ret;
 
+    PROGRESS(p2m):
+        ret = p2m_teardown(d);
+        if ( ret )
+            return ret;
+
     PROGRESS(done):
         break;
 
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 1affdafadb..27418ee5ee 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1527,17 +1527,58 @@ static void p2m_free_vmid(struct domain *d)
     spin_unlock(&vmid_alloc_lock);
 }
 
-void p2m_teardown(struct domain *d)
+int p2m_teardown(struct domain *d)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    unsigned long count = 0;
     struct page_info *pg;
+    unsigned int i;
+    int rc = 0;
+
+    p2m_write_lock(p2m);
+
+    /*
+     * We are about to free the intermediate page-tables, so clear the
+     * root to prevent any walk to use them.
+     */
+    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
+        clear_and_clean_page(p2m->root + i);
+
+    /*
+     * The domain will not be scheduled anymore, so in theory we should
+     * not need to flush the TLBs. Do it for safety purposes.
+     *
+     * Note that all the devices have already been de-assigned. So we don't
+     * need to flush the IOMMU TLB here.
+     */
+    p2m_force_tlb_flush_sync(p2m);
+
+    while ( (pg = page_list_remove_head(&p2m->pages)) )
+    {
+        free_domheap_page(pg);
+        count++;
+        /* Arbitrarily preempt every 512 iterations */
+        if ( !(count % 512) && hypercall_preempt_check() )
+        {
+            rc = -ERESTART;
+            break;
+        }
+    }
+
+    p2m_write_unlock(p2m);
+
+    return rc;
+}
+
+void p2m_final_teardown(struct domain *d)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
     /* p2m not actually initialized */
     if ( !p2m->domain )
         return;
 
-    while ( (pg = page_list_remove_head(&p2m->pages)) )
-        free_domheap_page(pg);
+    ASSERT(page_list_empty(&p2m->pages));
 
     if ( p2m->root )
         free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 8f11d9c97b..b3ba83283e 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -192,8 +192,17 @@ void setup_virt_paging(void);
 /* Init the datastructures for later use by the p2m code */
 int p2m_init(struct domain *d);
 
-/* Return all the p2m resources to Xen. */
-void p2m_teardown(struct domain *d);
+/*
+ * The P2M resources are freed in two parts:
+ *  - p2m_teardown() will be called when relinquishing the resources. It
+ *    will free large resources (e.g. intermediate page-tables) whose
+ *    freeing requires preemption.
+ *  - p2m_final_teardown() will be called when the domain struct is
+ *    freed. This *cannot* be preempted, and therefore only small
+ *    resources should be freed here.
+ */
+int p2m_teardown(struct domain *d);
+void p2m_final_teardown(struct domain *d);
 
 /*
  * Remove mapping refcount on each mapping page in the p2m
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16
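[Editorial note] The preemptible teardown the patch above introduces — free pages in a loop, poll for a pending preemption request every 512 iterations, and bail with -ERESTART so the operation can be restarted — can be sketched outside Xen as follows. The list type, the `preempt_requested` flag and the errno-style `ERESTART` value are illustrative stand-ins for Xen's `struct page_info`, `hypercall_preempt_check()` and error codes, not real Xen APIs:

```c
#include <assert.h>
#include <stdlib.h>

#define ERESTART 85 /* errno-style stand-in for Xen's -ERESTART */

struct page { struct page *next; };

/*
 * Free the whole list if possible, but poll a preemption flag every
 * 512 frees (mirroring p2m_teardown()); return -ERESTART when
 * interrupted so the caller can re-invoke us to continue.
 */
static int teardown_batch(struct page **head, const int *preempt_requested)
{
    unsigned long count = 0;

    while ( *head )
    {
        struct page *pg = *head;

        *head = pg->next;
        free(pg);
        count++;
        /* Arbitrarily preempt every 512 iterations. */
        if ( !(count % 512) && *preempt_requested )
            return -ERESTART;
    }

    return 0;
}
```

A caller simply loops on `teardown_batch()` until it returns 0, doing other work whenever it sees -ERESTART — analogous to how domain_relinquish_resources() re-enters at PROG_p2m.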


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 20:55:34 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 20:55:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431217.683945 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9uY-00038d-4z; Thu, 27 Oct 2022 20:55:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431217.683945; Thu, 27 Oct 2022 20:55:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9uY-00038W-26; Thu, 27 Oct 2022 20:55:34 +0000
Received: by outflank-mailman (input) for mailman id 431217;
 Thu, 27 Oct 2022 20:55:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9uW-00035u-6P
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:55:32 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9uW-0003Ip-5g
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:55:32 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9uW-00020h-4r
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:55:32 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=PYgVf+sBConzCWTKGebPjTglwvH2hES+0qinzfHPjhw=; b=zNZt44xL0P0//8PGhT8d4bHgIx
	2mvaPvSlh6+l/0MxvtlArYCKpMeiFQpwTkOAKdOd2pKHPQrJBeITbc27VE6usN2Nfc56OJIVV/UxQ
	bMIQhXieqXap5/aGtF2iqNB9OJghCr4GhtaT0/H3UVJvLv27wWSKLJkQabNK7I9JN/OU=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.16] x86/p2m: add option to skip root pagetable removal in p2m_teardown()
Message-Id: <E1oo9uW-00020h-4r@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 20:55:32 +0000

commit 8fc19c143b8aa563077f3d5c46fcc0a54dc04f35
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 14:52:39 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:52:39 2022 +0200

    x86/p2m: add option to skip root pagetable removal in p2m_teardown()
    
    Add a new parameter to p2m_teardown() in order to select whether the
    root page table should also be freed.  Note that all users are
    adjusted to pass the parameter to remove the root page tables, so
    behavior is not modified.
    
    No functional change intended.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Suggested-by: Julien Grall <julien@xen.org>
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: 1df52a270225527ae27bfa2fc40347bf93b78357
    master date: 2022-10-11 14:21:23 +0200
---
 xen/arch/x86/mm/hap/hap.c       |  6 +++---
 xen/arch/x86/mm/p2m.c           | 20 ++++++++++++++++----
 xen/arch/x86/mm/shadow/common.c |  4 ++--
 xen/include/asm-x86/p2m.h       |  2 +-
 4 files changed, 22 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 47a7487fa7..a8f5a19da9 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -541,18 +541,18 @@ void hap_final_teardown(struct domain *d)
         }
 
         for ( i = 0; i < MAX_ALTP2M; i++ )
-            p2m_teardown(d->arch.altp2m_p2m[i]);
+            p2m_teardown(d->arch.altp2m_p2m[i], true);
     }
 
     /* Destroy nestedp2m's first */
     for (i = 0; i < MAX_NESTEDP2M; i++) {
-        p2m_teardown(d->arch.nested_p2m[i]);
+        p2m_teardown(d->arch.nested_p2m[i], true);
     }
 
     if ( d->arch.paging.hap.total_pages != 0 )
         hap_teardown(d, NULL);
 
-    p2m_teardown(p2m_get_hostp2m(d));
+    p2m_teardown(p2m_get_hostp2m(d), true);
     /* Free any memory that the p2m teardown released */
     paging_lock(d);
     hap_set_allocation(d, 0, NULL);
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index def1695cf0..aba4f17cbe 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -749,11 +749,11 @@ int p2m_alloc_table(struct p2m_domain *p2m)
  * hvm fixme: when adding support for pvh non-hardware domains, this path must
  * cleanup any foreign p2m types (release refcnts on them).
  */
-void p2m_teardown(struct p2m_domain *p2m)
+void p2m_teardown(struct p2m_domain *p2m, bool remove_root)
 /* Return all the p2m pages to Xen.
  * We know we don't have any extra mappings to these pages */
 {
-    struct page_info *pg;
+    struct page_info *pg, *root_pg = NULL;
     struct domain *d;
 
     if (p2m == NULL)
@@ -763,10 +763,22 @@ void p2m_teardown(struct p2m_domain *p2m)
 
     p2m_lock(p2m);
     ASSERT(atomic_read(&d->shr_pages) == 0);
-    p2m->phys_table = pagetable_null();
+
+    if ( remove_root )
+        p2m->phys_table = pagetable_null();
+    else if ( !pagetable_is_null(p2m->phys_table) )
+    {
+        root_pg = pagetable_get_page(p2m->phys_table);
+        clear_domain_page(pagetable_get_mfn(p2m->phys_table));
+    }
 
     while ( (pg = page_list_remove_head(&p2m->pages)) )
-        d->arch.paging.free_page(d, pg);
+        if ( pg != root_pg )
+            d->arch.paging.free_page(d, pg);
+
+    if ( root_pg )
+        page_list_add(root_pg, &p2m->pages);
+
     p2m_unlock(p2m);
 }
 
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 8c1b041f71..8c5baba954 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2701,7 +2701,7 @@ int shadow_enable(struct domain *d, u32 mode)
     paging_unlock(d);
  out_unlocked:
     if ( rv != 0 && !pagetable_is_null(p2m_get_pagetable(p2m)) )
-        p2m_teardown(p2m);
+        p2m_teardown(p2m, true);
     if ( rv != 0 && pg != NULL )
     {
         pg->count_info &= ~PGC_count_mask;
@@ -2866,7 +2866,7 @@ void shadow_final_teardown(struct domain *d)
         shadow_teardown(d, NULL);
 
     /* It is now safe to pull down the p2m map. */
-    p2m_teardown(p2m_get_hostp2m(d));
+    p2m_teardown(p2m_get_hostp2m(d), true);
     /* Free any shadow memory that the p2m teardown released */
     paging_lock(d);
     shadow_set_allocation(d, 0, NULL);
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index f2af7a746c..c3c16748e7 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -574,7 +574,7 @@ int p2m_init(struct domain *d);
 int p2m_alloc_table(struct p2m_domain *p2m);
 
 /* Return all the p2m resources to Xen. */
-void p2m_teardown(struct p2m_domain *p2m);
+void p2m_teardown(struct p2m_domain *p2m, bool remove_root);
 void p2m_final_teardown(struct domain *d);
 
 /* Add a page to a domain's p2m table */
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16
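[Editorial note] The `remove_root` handling the patch above adds — either drop the root outright, or clear its contents and re-queue it on the page list so only a later final teardown frees it — can be sketched with a toy page list. All names and types here are illustrative stand-ins for Xen's `struct page_info`, the `pagetable_*` helpers and `clear_domain_page()`:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define TOY_PAGE_SIZE 64 /* toy size, not x86's 4 KiB */

struct page { struct page *next; unsigned char data[TOY_PAGE_SIZE]; };

/*
 * Free every page on the list; when remove_root is false, the root page
 * is instead zeroed and put back on the list, so it stays allocated
 * until a later final teardown - as in the patched p2m_teardown().
 */
static void teardown(struct page **pages, struct page **root, int remove_root)
{
    struct page *root_pg = NULL;
    struct page *pg;

    if ( remove_root )
        *root = NULL;                            /* like pagetable_null() */
    else if ( *root )
    {
        root_pg = *root;
        memset(root_pg->data, 0, TOY_PAGE_SIZE); /* like clear_domain_page() */
    }

    while ( (pg = *pages) != NULL )
    {
        *pages = pg->next;
        if ( pg != root_pg )
            free(pg);
    }

    if ( root_pg )
    {
        root_pg->next = NULL;
        *pages = root_pg; /* keep the (now blank) root queued */
    }
}
```

With `remove_root == false` the caller can still walk a consistent (empty) root afterwards, which is the property the real change relies on.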


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 20:55:44 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 20:55:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431219.683950 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9ui-0003H6-6z; Thu, 27 Oct 2022 20:55:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431219.683950; Thu, 27 Oct 2022 20:55:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9ui-0003Gt-3l; Thu, 27 Oct 2022 20:55:44 +0000
Received: by outflank-mailman (input) for mailman id 431219;
 Thu, 27 Oct 2022 20:55:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9ug-0003GT-9N
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:55:42 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9ug-0003J1-8e
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:55:42 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9ug-00021F-83
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:55:42 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=pssZv97H47jhjbg4JtjQUlmLW5O9qt0NWtJ9LrES+2Y=; b=i8kYbSlTe3FbrATy/Evxamd73Z
	HGVTdrK41pIfinpoAbRSi7YADaTAiEEjPDtCQUApPRjt3pkv9NxkOfDJ13CUdeaAKu63DjZjoHc3k
	V9UOPFtIZqplxHwqzXU8JeUyZV9zFK7rFhvQTcqLNpTkuSNa3KB4rxRk6OmkznjbZgnY=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.16] x86/HAP: adjust monitor table related error handling
Message-Id: <E1oo9ug-00021F-83@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 20:55:42 +0000

commit 3422c19d85a3d23a9d798eafb739ffb8865522d2
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 14:52:59 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:52:59 2022 +0200

    x86/HAP: adjust monitor table related error handling
    
    hap_make_monitor_table() will return INVALID_MFN if it encounters an
    error condition, but hap_update_paging_modes() wasn't handling this
    value, resulting in an inappropriate value being stored in
    monitor_table. This would subsequently mislead at least
    hap_vcpu_teardown(). Avoid this by bailing early.

    Further, when a domain has already been crashed or (perhaps less
    important, as there's no such path known to lead here) is already
    dying, avoid calling domain_crash() on it again - that's at best
    confusing.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    master commit: 5b44a61180f4f2e4f490a28400c884dd357ff45d
    master date: 2022-10-11 14:21:56 +0200
---
 xen/arch/x86/mm/hap/hap.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index a8f5a19da9..d75dc2b9ed 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -39,6 +39,7 @@
 #include <asm/domain.h>
 #include <xen/numa.h>
 #include <asm/hvm/nestedhvm.h>
+#include <public/sched.h>
 
 #include "private.h"
 
@@ -405,8 +406,13 @@ static mfn_t hap_make_monitor_table(struct vcpu *v)
     return m4mfn;
 
  oom:
-    printk(XENLOG_G_ERR "out of memory building monitor pagetable\n");
-    domain_crash(d);
+    if ( !d->is_dying &&
+         (!d->is_shutting_down || d->shutdown_code != SHUTDOWN_crash) )
+    {
+        printk(XENLOG_G_ERR "%pd: out of memory building monitor pagetable\n",
+               d);
+        domain_crash(d);
+    }
     return INVALID_MFN;
 }
 
@@ -766,6 +772,9 @@ static void hap_update_paging_modes(struct vcpu *v)
     if ( pagetable_is_null(v->arch.hvm.monitor_table) )
     {
         mfn_t mmfn = hap_make_monitor_table(v);
+
+        if ( mfn_eq(mmfn, INVALID_MFN) )
+            goto unlock;
         v->arch.hvm.monitor_table = pagetable_from_mfn(mmfn);
         make_cr3(v, mmfn);
         hvm_update_host_cr3(v);
@@ -774,6 +783,7 @@ static void hap_update_paging_modes(struct vcpu *v)
     /* CR3 is effectively updated by a mode change. Flush ASIDs, etc. */
     hap_update_cr3(v, 0, false);
 
+ unlock:
     paging_unlock(d);
     put_gfn(d, cr3_gfn);
 }
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 20:55:53 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 20:55:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431220.683952 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9ur-0003KP-7j; Thu, 27 Oct 2022 20:55:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431220.683952; Thu, 27 Oct 2022 20:55:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9ur-0003KH-5A; Thu, 27 Oct 2022 20:55:53 +0000
Received: by outflank-mailman (input) for mailman id 431220;
 Thu, 27 Oct 2022 20:55:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9uq-0003K6-Ch
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:55:52 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9uq-0003J8-Bv
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:55:52 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9uq-00021l-B4
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:55:52 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=KRRUg4HpvKvhSZF8ldfngOqQG2I/fWPBiC3QIe+jhV0=; b=ICnCxTGKqKqKtM8mD/Imlyb3en
	RU9UKs8LgB+0yt7023hUHPDAyhJ8okDUhpfKuO/df4Hx+Nzpstk/Z3m0JoiRoOEeIp7iwGv+MHrJI
	SNBpo0bIvdTRZ7qomVMWj632rqY+zjrxZtoEsTuKIJ13u/yAoJKek0J6FRnS6q59cc0k=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.16] x86/shadow: tolerate failure of sh_set_toplevel_shadow()
Message-Id: <E1oo9uq-00021l-B4@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 20:55:52 +0000

commit 40e9daf6b56ae49bda3ba4e254ccf0e998e52a8c
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 14:53:12 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:53:12 2022 +0200

    x86/shadow: tolerate failure of sh_set_toplevel_shadow()
    
    Subsequently sh_set_toplevel_shadow() will be adjusted to install a
    blank entry in case prealloc fails. There are, in fact, pre-existing
    error paths which would put in place a blank entry. The 4- and 2-level
    code in sh_update_cr3(), however, assumes the top-level entry is
    valid.
    
    Hence bail from the function in the unlikely event that it's not. Note
    that the 3-level logic works differently: in particular, a guest is
    free to supply a PDPTR pointing at 4 non-present (or otherwise deemed
    invalid) entries. The guest will crash, but we already cope with that.
    
    Really, mfn_valid() is likely the wrong check to use in
    sh_set_toplevel_shadow(); it should instead be
    !mfn_eq(gmfn, INVALID_MFN). Avoid such a change in a security context,
    but add a corresponding assertion.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: eac000978c1feb5a9ee3236ab0c0da9a477e5336
    master date: 2022-10-11 14:22:24 +0200
---
 xen/arch/x86/mm/shadow/common.c |  1 +
 xen/arch/x86/mm/shadow/multi.c  | 10 ++++++++++
 2 files changed, 11 insertions(+)

diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 8c5baba954..00e520cbd0 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2516,6 +2516,7 @@ void sh_set_toplevel_shadow(struct vcpu *v,
     /* Now figure out the new contents: is this a valid guest MFN? */
     if ( !mfn_valid(gmfn) )
     {
+        ASSERT(mfn_eq(gmfn, INVALID_MFN));
         new_entry = pagetable_null();
         goto install_new_entry;
     }
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 7b8f4dd13b..2ff78fe336 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -3312,6 +3312,11 @@ sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
     if ( sh_remove_write_access(d, gmfn, 4, 0) != 0 )
         guest_flush_tlb_mask(d, d->dirty_cpumask);
     sh_set_toplevel_shadow(v, 0, gmfn, SH_type_l4_shadow, sh_make_shadow);
+    if ( unlikely(pagetable_is_null(v->arch.paging.shadow.shadow_table[0])) )
+    {
+        ASSERT(d->is_dying || d->is_shutting_down);
+        return;
+    }
     if ( !shadow_mode_external(d) && !is_pv_32bit_domain(d) )
     {
         mfn_t smfn = pagetable_get_mfn(v->arch.paging.shadow.shadow_table[0]);
@@ -3370,6 +3375,11 @@ sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
     if ( sh_remove_write_access(d, gmfn, 2, 0) != 0 )
         guest_flush_tlb_mask(d, d->dirty_cpumask);
     sh_set_toplevel_shadow(v, 0, gmfn, SH_type_l2_shadow, sh_make_shadow);
+    if ( unlikely(pagetable_is_null(v->arch.paging.shadow.shadow_table[0])) )
+    {
+        ASSERT(d->is_dying || d->is_shutting_down);
+        return;
+    }
 #else
 #error This should never happen
 #endif
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 20:56:03 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 20:56:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431221.683957 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9v1-0003OF-9e; Thu, 27 Oct 2022 20:56:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431221.683957; Thu, 27 Oct 2022 20:56:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9v1-0003O8-6p; Thu, 27 Oct 2022 20:56:03 +0000
Received: by outflank-mailman (input) for mailman id 431221;
 Thu, 27 Oct 2022 20:56:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9v0-0003Nu-GH
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:56:02 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9v0-0003JW-FW
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:56:02 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9v0-00022U-Ed
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:56:02 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=PxXNd6QPJP+b6Dt9XhU/dVLpzNA6CxihkwYuKn5VhLM=; b=tA1ZIx1W0kQ1ood6omMo6kLxkR
	qtqlRashXUnvQxxj0C0U4g4YMXqlRrDJx0fLp9cllzzn3bxh5p1X5eOr6hb3wRIypRqTrfVGQbY4+
	17q4g/eetZ6gPOvgUuLPPrydi/4IZFb3XrAQTNbRqxZ0sjQgEYHJahYEr5InPn8xdtMk=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.16] x86/shadow: tolerate failure in shadow_prealloc()
Message-Id: <E1oo9v0-00022U-Ed@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 20:56:02 +0000

commit 28d3f677ec97c98154311f64871ac48762cf980a
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 14:53:27 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:53:27 2022 +0200

    x86/shadow: tolerate failure in shadow_prealloc()
    
    Prevent _shadow_prealloc() from calling BUG() when unable to fulfill
    the pre-allocation and instead return true/false.  Modify
    shadow_prealloc() to crash the domain on allocation failure (if the
    domain is not already dying), as shadow cannot operate normally after
    that.  Modify callers to also gracefully handle {_,}shadow_prealloc()
    failing to fulfill the request.
    
    Note this in turn requires adjusting the callers of
    sh_make_monitor_table() also to handle it returning INVALID_MFN.
    sh_update_paging_modes() is also modified to add additional error
    paths in case of allocation failure; some of those will return with
    null monitor page tables (and the domain likely crashed). This is no
    different from the current error paths, but the newly introduced ones
    are more likely to trigger.
    
    The now added failure points in sh_update_paging_modes() also require
    that on some error return paths the previous structures are cleared,
    and thus the monitor table is null.
    
    While there, adjust the type of shadow_prealloc()'s 'type' parameter
    to unsigned int rather than u32.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: b7f93c6afb12b6061e2d19de2f39ea09b569ac68
    master date: 2022-10-11 14:22:53 +0200
---
 xen/arch/x86/mm/shadow/common.c  | 69 ++++++++++++++++++++++++++++++----------
 xen/arch/x86/mm/shadow/hvm.c     |  4 ++-
 xen/arch/x86/mm/shadow/multi.c   | 11 +++++--
 xen/arch/x86/mm/shadow/private.h |  3 +-
 4 files changed, 66 insertions(+), 21 deletions(-)

diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 00e520cbd0..2067c7d16b 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -36,6 +36,7 @@
 #include <asm/flushtlb.h>
 #include <asm/shadow.h>
 #include <xen/numa.h>
+#include <public/sched.h>
 #include "private.h"
 
 DEFINE_PER_CPU(uint32_t,trace_shadow_path_flags);
@@ -928,14 +929,15 @@ static inline void trace_shadow_prealloc_unpin(struct domain *d, mfn_t smfn)
 
 /* Make sure there are at least count order-sized pages
  * available in the shadow page pool. */
-static void _shadow_prealloc(struct domain *d, unsigned int pages)
+static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
 {
     struct vcpu *v;
     struct page_info *sp, *t;
     mfn_t smfn;
     int i;
 
-    if ( d->arch.paging.shadow.free_pages >= pages ) return;
+    if ( d->arch.paging.shadow.free_pages >= pages )
+        return true;
 
     /* Shouldn't have enabled shadows if we've no vcpus. */
     ASSERT(d->vcpu && d->vcpu[0]);
@@ -951,7 +953,8 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
         sh_unpin(d, smfn);
 
         /* See if that freed up enough space */
-        if ( d->arch.paging.shadow.free_pages >= pages ) return;
+        if ( d->arch.paging.shadow.free_pages >= pages )
+            return true;
     }
 
     /* Stage two: all shadow pages are in use in hierarchies that are
@@ -974,7 +977,7 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
                 if ( d->arch.paging.shadow.free_pages >= pages )
                 {
                     guest_flush_tlb_mask(d, d->dirty_cpumask);
-                    return;
+                    return true;
                 }
             }
         }
@@ -987,7 +990,12 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
            d->arch.paging.shadow.total_pages,
            d->arch.paging.shadow.free_pages,
            d->arch.paging.shadow.p2m_pages);
-    BUG();
+
+    ASSERT(d->is_dying);
+
+    guest_flush_tlb_mask(d, d->dirty_cpumask);
+
+    return false;
 }
 
 /* Make sure there are at least count pages of the order according to
@@ -995,9 +1003,19 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
  * This must be called before any calls to shadow_alloc().  Since this
  * will free existing shadows to make room, it must be called early enough
  * to avoid freeing shadows that the caller is currently working on. */
-void shadow_prealloc(struct domain *d, u32 type, unsigned int count)
+bool shadow_prealloc(struct domain *d, unsigned int type, unsigned int count)
 {
-    return _shadow_prealloc(d, shadow_size(type) * count);
+    bool ret = _shadow_prealloc(d, shadow_size(type) * count);
+
+    if ( !ret && !d->is_dying &&
+         (!d->is_shutting_down || d->shutdown_code != SHUTDOWN_crash) )
+        /*
+         * Failing to allocate memory required for shadow usage can only result in
+         * a domain crash, do it here rather that relying on every caller to do it.
+         */
+        domain_crash(d);
+
+    return ret;
 }
 
 /* Deliberately free all the memory we can: this will tear down all of
@@ -1218,7 +1236,7 @@ void shadow_free(struct domain *d, mfn_t smfn)
 static struct page_info *
 shadow_alloc_p2m_page(struct domain *d)
 {
-    struct page_info *pg;
+    struct page_info *pg = NULL;
 
     /* This is called both from the p2m code (which never holds the
      * paging lock) and the log-dirty code (which always does). */
@@ -1236,16 +1254,18 @@ shadow_alloc_p2m_page(struct domain *d)
                     d->arch.paging.shadow.p2m_pages,
                     shadow_min_acceptable_pages(d));
         }
-        paging_unlock(d);
-        return NULL;
+        goto out;
     }
 
-    shadow_prealloc(d, SH_type_p2m_table, 1);
+    if ( !shadow_prealloc(d, SH_type_p2m_table, 1) )
+        goto out;
+
     pg = mfn_to_page(shadow_alloc(d, SH_type_p2m_table, 0));
     d->arch.paging.shadow.p2m_pages++;
     d->arch.paging.shadow.total_pages--;
     ASSERT(!page_get_owner(pg) && !(pg->count_info & PGC_count_mask));
 
+ out:
     paging_unlock(d);
 
     return pg;
@@ -1336,7 +1356,9 @@ int shadow_set_allocation(struct domain *d, unsigned int pages, bool *preempted)
         else if ( d->arch.paging.shadow.total_pages > pages )
         {
             /* Need to return memory to domheap */
-            _shadow_prealloc(d, 1);
+            if ( !_shadow_prealloc(d, 1) )
+                return -ENOMEM;
+
             sp = page_list_remove_head(&d->arch.paging.shadow.freelist);
             ASSERT(sp);
             /*
@@ -2334,12 +2356,13 @@ static void sh_update_paging_modes(struct vcpu *v)
     if ( mfn_eq(v->arch.paging.shadow.oos_snapshot[0], INVALID_MFN) )
     {
         int i;
+
+        if ( !shadow_prealloc(d, SH_type_oos_snapshot, SHADOW_OOS_PAGES) )
+            return;
+
         for(i = 0; i < SHADOW_OOS_PAGES; i++)
-        {
-            shadow_prealloc(d, SH_type_oos_snapshot, 1);
             v->arch.paging.shadow.oos_snapshot[i] =
                 shadow_alloc(d, SH_type_oos_snapshot, 0);
-        }
     }
 #endif /* OOS */
 
@@ -2403,6 +2426,9 @@ static void sh_update_paging_modes(struct vcpu *v)
             mfn_t mmfn = sh_make_monitor_table(
                              v, v->arch.paging.mode->shadow.shadow_levels);
 
+            if ( mfn_eq(mmfn, INVALID_MFN) )
+                return;
+
             v->arch.hvm.monitor_table = pagetable_from_mfn(mmfn);
             make_cr3(v, mmfn);
             hvm_update_host_cr3(v);
@@ -2441,6 +2467,12 @@ static void sh_update_paging_modes(struct vcpu *v)
                 v->arch.hvm.monitor_table = pagetable_null();
                 new_mfn = sh_make_monitor_table(
                               v, v->arch.paging.mode->shadow.shadow_levels);
+                if ( mfn_eq(new_mfn, INVALID_MFN) )
+                {
+                    sh_destroy_monitor_table(v, old_mfn,
+                                             old_mode->shadow.shadow_levels);
+                    return;
+                }
                 v->arch.hvm.monitor_table = pagetable_from_mfn(new_mfn);
                 SHADOW_PRINTK("new monitor table %"PRI_mfn "\n",
                                mfn_x(new_mfn));
@@ -2526,7 +2558,12 @@ void sh_set_toplevel_shadow(struct vcpu *v,
     if ( !mfn_valid(smfn) )
     {
         /* Make sure there's enough free shadow memory. */
-        shadow_prealloc(d, root_type, 1);
+        if ( !shadow_prealloc(d, root_type, 1) )
+        {
+            new_entry = pagetable_null();
+            goto install_new_entry;
+        }
+
         /* Shadow the page. */
         smfn = make_shadow(v, gmfn, root_type);
     }
diff --git a/xen/arch/x86/mm/shadow/hvm.c b/xen/arch/x86/mm/shadow/hvm.c
index d5f42102a0..a0878d9ad7 100644
--- a/xen/arch/x86/mm/shadow/hvm.c
+++ b/xen/arch/x86/mm/shadow/hvm.c
@@ -700,7 +700,9 @@ mfn_t sh_make_monitor_table(const struct vcpu *v, unsigned int shadow_levels)
     ASSERT(!pagetable_get_pfn(v->arch.hvm.monitor_table));
 
     /* Guarantee we can get the memory we need */
-    shadow_prealloc(d, SH_type_monitor_table, CONFIG_PAGING_LEVELS);
+    if ( !shadow_prealloc(d, SH_type_monitor_table, CONFIG_PAGING_LEVELS) )
+        return INVALID_MFN;
+
     m4mfn = shadow_alloc(d, SH_type_monitor_table, 0);
     mfn_to_page(m4mfn)->shadow_flags = 4;
 
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 2ff78fe336..c07af0bd99 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -2440,9 +2440,14 @@ static int sh_page_fault(struct vcpu *v,
      * Preallocate shadow pages *before* removing writable accesses
      * otherwhise an OOS L1 might be demoted and promoted again with
      * writable mappings. */
-    shadow_prealloc(d,
-                    SH_type_l1_shadow,
-                    GUEST_PAGING_LEVELS < 4 ? 1 : GUEST_PAGING_LEVELS - 1);
+    if ( !shadow_prealloc(d, SH_type_l1_shadow,
+                          GUEST_PAGING_LEVELS < 4
+                          ? 1 : GUEST_PAGING_LEVELS - 1) )
+    {
+        paging_unlock(d);
+        put_gfn(d, gfn_x(gfn));
+        return 0;
+    }
 
     rc = gw_remove_write_accesses(v, va, &gw);
 
diff --git a/xen/arch/x86/mm/shadow/private.h b/xen/arch/x86/mm/shadow/private.h
index 35efb1b984..738214f75e 100644
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -383,7 +383,8 @@ void shadow_promote(struct domain *d, mfn_t gmfn, u32 type);
 void shadow_demote(struct domain *d, mfn_t gmfn, u32 type);
 
 /* Shadow page allocation functions */
-void  shadow_prealloc(struct domain *d, u32 shadow_type, unsigned int count);
+bool __must_check shadow_prealloc(struct domain *d, unsigned int shadow_type,
+                                  unsigned int count);
 mfn_t shadow_alloc(struct domain *d,
                     u32 shadow_type,
                     unsigned long backpointer);
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 20:56:13 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 20:56:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431222.683961 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9vB-0003Rz-BV; Thu, 27 Oct 2022 20:56:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431222.683961; Thu, 27 Oct 2022 20:56:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9vB-0003Rr-8f; Thu, 27 Oct 2022 20:56:13 +0000
Received: by outflank-mailman (input) for mailman id 431222;
 Thu, 27 Oct 2022 20:56:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9vA-0003Rb-JG
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:56:12 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9vA-0003Jg-IY
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:56:12 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9vA-00022t-Hw
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:56:12 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=M1OHwQqvtXRASHFq1g4cuuEzHHQ37nWPbsnED1R5PrA=; b=NwylHo+2rWgIY2VXTtxOTUDXfj
	ThJydaHHITii3+Avletx1Ed+1IA8eM8+6QHPLBsACajszU7UZRY9eQj/omD2Z4xvY08KXUnNaE8NJ
	WVAZ+CmoKf3erj0Tuwsu6TvXbhV3aXPBC2AjSvxs5lxJFZLR+YQxYPfSqtkdRVqtyXNY=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.16] x86/p2m: refuse new allocations for dying domains
Message-Id: <E1oo9vA-00022t-Hw@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 20:56:12 +0000

commit 745e0b300dc3f5000e6d48c273b405d4bcc29ba7
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 14:53:41 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:53:41 2022 +0200

    x86/p2m: refuse new allocations for dying domains
    
    This will in particular prevent any attempts to add entries to the p2m,
    once - in a subsequent change - non-root entries have been removed.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: ff600a8cf8e36f8ecbffecf96a035952e022ab87
    master date: 2022-10-11 14:23:22 +0200
---
 xen/arch/x86/mm/hap/hap.c       |  5 ++++-
 xen/arch/x86/mm/shadow/common.c | 18 ++++++++++++++----
 2 files changed, 18 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index d75dc2b9ed..787991233e 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -245,6 +245,9 @@ static struct page_info *hap_alloc(struct domain *d)
 
     ASSERT(paging_locked_by_me(d));
 
+    if ( unlikely(d->is_dying) )
+        return NULL;
+
     pg = page_list_remove_head(&d->arch.paging.hap.freelist);
     if ( unlikely(!pg) )
         return NULL;
@@ -281,7 +284,7 @@ static struct page_info *hap_alloc_p2m_page(struct domain *d)
         d->arch.paging.hap.p2m_pages++;
         ASSERT(!page_get_owner(pg) && !(pg->count_info & PGC_count_mask));
     }
-    else if ( !d->arch.paging.p2m_alloc_failed )
+    else if ( !d->arch.paging.p2m_alloc_failed && !d->is_dying )
     {
         d->arch.paging.p2m_alloc_failed = 1;
         dprintk(XENLOG_ERR, "d%i failed to allocate from HAP pool\n",
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 2067c7d16b..9807f6ec6c 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -939,6 +939,10 @@ static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
     if ( d->arch.paging.shadow.free_pages >= pages )
         return true;
 
+    if ( unlikely(d->is_dying) )
+        /* No reclaim when the domain is dying, teardown will take care of it. */
+        return false;
+
     /* Shouldn't have enabled shadows if we've no vcpus. */
     ASSERT(d->vcpu && d->vcpu[0]);
 
@@ -991,7 +995,7 @@ static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
            d->arch.paging.shadow.free_pages,
            d->arch.paging.shadow.p2m_pages);
 
-    ASSERT(d->is_dying);
+    ASSERT_UNREACHABLE();
 
     guest_flush_tlb_mask(d, d->dirty_cpumask);
 
@@ -1005,10 +1009,13 @@ static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
  * to avoid freeing shadows that the caller is currently working on. */
 bool shadow_prealloc(struct domain *d, unsigned int type, unsigned int count)
 {
-    bool ret = _shadow_prealloc(d, shadow_size(type) * count);
+    bool ret;
 
-    if ( !ret && !d->is_dying &&
-         (!d->is_shutting_down || d->shutdown_code != SHUTDOWN_crash) )
+    if ( unlikely(d->is_dying) )
+       return false;
+
+    ret = _shadow_prealloc(d, shadow_size(type) * count);
+    if ( !ret && (!d->is_shutting_down || d->shutdown_code != SHUTDOWN_crash) )
         /*
          * Failing to allocate memory required for shadow usage can only result in
          * a domain crash, do it here rather that relying on every caller to do it.
@@ -1238,6 +1245,9 @@ shadow_alloc_p2m_page(struct domain *d)
 {
     struct page_info *pg = NULL;
 
+    if ( unlikely(d->is_dying) )
+       return NULL;
+
     /* This is called both from the p2m code (which never holds the
      * paging lock) and the log-dirty code (which always does). */
     paging_lock_recursive(d);
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 20:56:23 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 20:56:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431223.683965 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9vL-0003YZ-Ec; Thu, 27 Oct 2022 20:56:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431223.683965; Thu, 27 Oct 2022 20:56:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9vL-0003YS-Bw; Thu, 27 Oct 2022 20:56:23 +0000
Received: by outflank-mailman (input) for mailman id 431223;
 Thu, 27 Oct 2022 20:56:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9vK-0003Xz-ND
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:56:22 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9vK-0003KJ-Li
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:56:22 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9vK-00023I-Ky
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:56:22 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=fam3Gh/kvxmruxYqy2dklcIEC/umBbVCqCi7m8iBcpk=; b=efy9thmGYgfoOneXTFSI1LcyUV
	R2qxsZrTkeiP60omNpBcRN1xStNq3vpDxRO0sUeWPHPUC7sGpMfdcgIm70rWdvI2AH0Bhq4W7xM0T
	RyuZC+Y8qv4BmgJIsZVlItfGESSE07VTQA67OFdPHmBQ1NpuH5JjFDG/I5lmbUfUYDIc=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.16] x86/p2m: truly free paging pool memory for dying domains
Message-Id: <E1oo9vK-00023I-Ky@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 20:56:22 +0000

commit 943635d8f8486209e4e48966507ad57963e96284
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 14:54:00 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:54:00 2022 +0200

    x86/p2m: truly free paging pool memory for dying domains
    
    Modify {hap,shadow}_free to free the page immediately if the domain is
    dying, so that pages don't accumulate in the pool when
    {shadow,hap}_final_teardown() get called. This is to limit the amount of
    work which needs to be done there (in a non-preemptable manner).
    
    Note the call to shadow_free() in shadow_free_p2m_page() is moved
    after increasing total_pages, so that the decrease done in
    shadow_free() when the domain is dying doesn't underflow the counter,
    even if just for a short interval.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: f50a2c0e1d057c00d6061f40ae24d068226052ad
    master date: 2022-10-11 14:23:51 +0200
---
 xen/arch/x86/mm/hap/hap.c       | 12 ++++++++++++
 xen/arch/x86/mm/shadow/common.c | 28 +++++++++++++++++++++++++---
 2 files changed, 37 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 787991233e..aef2297450 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -265,6 +265,18 @@ static void hap_free(struct domain *d, mfn_t mfn)
 
     ASSERT(paging_locked_by_me(d));
 
+    /*
+     * For dying domains, actually free the memory here. This way less work is
+     * left to hap_final_teardown(), which cannot easily have preemption checks
+     * added.
+     */
+    if ( unlikely(d->is_dying) )
+    {
+        free_domheap_page(pg);
+        d->arch.paging.hap.total_pages--;
+        return;
+    }
+
     d->arch.paging.hap.free_pages++;
     page_list_add_tail(pg, &d->arch.paging.hap.freelist);
 }
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 9807f6ec6c..9eb33eafc7 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -1187,6 +1187,7 @@ mfn_t shadow_alloc(struct domain *d,
 void shadow_free(struct domain *d, mfn_t smfn)
 {
     struct page_info *next = NULL, *sp = mfn_to_page(smfn);
+    bool dying = ACCESS_ONCE(d->is_dying);
     struct page_list_head *pin_list;
     unsigned int pages;
     u32 shadow_type;
@@ -1229,11 +1230,32 @@ void shadow_free(struct domain *d, mfn_t smfn)
          * just before the allocator hands the page out again. */
         page_set_tlbflush_timestamp(sp);
         perfc_decr(shadow_alloc_count);
-        page_list_add_tail(sp, &d->arch.paging.shadow.freelist);
+
+        /*
+         * For dying domains, actually free the memory here. This way less
+         * work is left to shadow_final_teardown(), which cannot easily have
+         * preemption checks added.
+         */
+        if ( unlikely(dying) )
+        {
+            /*
+             * The backpointer field (sh.back) used by shadow code aliases the
+             * domain owner field, unconditionally clear it here to avoid
+             * free_domheap_page() attempting to parse it.
+             */
+            page_set_owner(sp, NULL);
+            free_domheap_page(sp);
+        }
+        else
+            page_list_add_tail(sp, &d->arch.paging.shadow.freelist);
+
         sp = next;
     }
 
-    d->arch.paging.shadow.free_pages += pages;
+    if ( unlikely(dying) )
+        d->arch.paging.shadow.total_pages -= pages;
+    else
+        d->arch.paging.shadow.free_pages += pages;
 }
 
 /* Divert a page from the pool to be used by the p2m mapping.
@@ -1303,9 +1325,9 @@ shadow_free_p2m_page(struct domain *d, struct page_info *pg)
      * paging lock) and the log-dirty code (which always does). */
     paging_lock_recursive(d);
 
-    shadow_free(d, page_to_mfn(pg));
     d->arch.paging.shadow.p2m_pages--;
     d->arch.paging.shadow.total_pages++;
+    shadow_free(d, page_to_mfn(pg));
 
     paging_unlock(d);
 }
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 20:56:33 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 20:56:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431224.683968 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9vV-0003nG-GH; Thu, 27 Oct 2022 20:56:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431224.683968; Thu, 27 Oct 2022 20:56:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9vV-0003n8-Dd; Thu, 27 Oct 2022 20:56:33 +0000
Received: by outflank-mailman (input) for mailman id 431224;
 Thu, 27 Oct 2022 20:56:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9vU-0003my-PP
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:56:32 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9vU-0003KU-Oo
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:56:32 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9vU-00023l-O9
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:56:32 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=vlQkIU6wZT9T1CBCnMCjD9fAlxrbkI/EWoA+Ves4NpU=; b=UvGI1KZMKYH3BMcG7HoP2DKtsC
	rgQxfywY9JEo0uw5aiq1k9gEft2TEWhMDvBeIkyHntw0/qqL+3bWbUJuZusDqvQ3UzQc9ybTVQ0/o
	mFCM6UQA17mxI9uX7omkqAVA3jJicBpCotzODsqP1mM88L2pIbBBIsuv2QJLnu8cWVx4=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.16] x86/p2m: free the paging memory pool preemptively
Message-Id: <E1oo9vU-00023l-O9@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 20:56:32 +0000

commit f5959ed715e19cf2844656477dbf74c2f576c9d4
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 14:54:21 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:54:21 2022 +0200

    x86/p2m: free the paging memory pool preemptively
    
    The paging memory pool is currently freed in two different places:
    from {shadow,hap}_teardown() via domain_relinquish_resources() and
    from {shadow,hap}_final_teardown() via complete_domain_destroy().
    While the former does handle preemption, the latter doesn't.
    
    Attempt to move as much p2m related freeing as possible to happen
    before the call to {shadow,hap}_teardown(), so that most memory can be
    freed in a preemptible way.  In order to avoid causing issues for
    existing callers, leave the root p2m page tables set and free them in
    {hap,shadow}_final_teardown().  Also modify {hap,shadow}_free to free
    the page immediately if the domain is dying, so that pages don't
    accumulate in the pool when {shadow,hap}_final_teardown() gets called.
    
    Move altp2m_vcpu_disable_ve() to be done in hap_teardown(), as that's
    the place where altp2m_active gets disabled now.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Reported-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: e7aa55c0aab36d994bf627c92bd5386ae167e16e
    master date: 2022-10-11 14:24:21 +0200
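The dying-domain fast path described above can be sketched generically; the structure and function names below are illustrative stand-ins for Xen's real bookkeeping (free_domheap_page(), page_list_add_tail()), not the actual code:

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative stand-in for the shadow/HAP pool counters. */
struct pool {
    size_t free_pages;   /* pages parked on the freelist */
    size_t total_pages;  /* pages owned by the pool */
};

/*
 * Return one page to the pool. For a dying domain the page is handed
 * straight back to the allocator (shrinking the pool), so less work is
 * left for the non-preemptible final teardown; otherwise it is kept on
 * the freelist for reuse.
 */
void pool_free_page(struct pool *p, bool dying)
{
    if ( dying )
        p->total_pages--;   /* free_domheap_page(pg) in the real code */
    else
        p->free_pages++;    /* page_list_add_tail(...) in the real code */
}
```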
---
 xen/arch/x86/domain.c           |  7 -------
 xen/arch/x86/mm/hap/hap.c       | 42 +++++++++++++++++++++++++----------------
 xen/arch/x86/mm/shadow/common.c | 12 ++++++++++++
 3 files changed, 38 insertions(+), 23 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 0d39981550..a4356893bd 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -38,7 +38,6 @@
 #include <xen/livepatch.h>
 #include <public/sysctl.h>
 #include <public/hvm/hvm_vcpu.h>
-#include <asm/altp2m.h>
 #include <asm/regs.h>
 #include <asm/mc146818rtc.h>
 #include <asm/system.h>
@@ -2381,12 +2380,6 @@ int domain_relinquish_resources(struct domain *d)
             vpmu_destroy(v);
         }
 
-        if ( altp2m_active(d) )
-        {
-            for_each_vcpu ( d, v )
-                altp2m_vcpu_disable_ve(v);
-        }
-
         if ( is_pv_domain(d) )
         {
             for_each_vcpu ( d, v )
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index aef2297450..a44fcfd95e 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -28,6 +28,7 @@
 #include <xen/domain_page.h>
 #include <xen/guest_access.h>
 #include <xen/keyhandler.h>
+#include <asm/altp2m.h>
 #include <asm/event.h>
 #include <asm/page.h>
 #include <asm/current.h>
@@ -546,24 +547,8 @@ void hap_final_teardown(struct domain *d)
     unsigned int i;
 
     if ( hvm_altp2m_supported() )
-    {
-        d->arch.altp2m_active = 0;
-
-        if ( d->arch.altp2m_eptp )
-        {
-            free_xenheap_page(d->arch.altp2m_eptp);
-            d->arch.altp2m_eptp = NULL;
-        }
-
-        if ( d->arch.altp2m_visible_eptp )
-        {
-            free_xenheap_page(d->arch.altp2m_visible_eptp);
-            d->arch.altp2m_visible_eptp = NULL;
-        }
-
         for ( i = 0; i < MAX_ALTP2M; i++ )
             p2m_teardown(d->arch.altp2m_p2m[i], true);
-    }
 
     /* Destroy nestedp2m's first */
     for (i = 0; i < MAX_NESTEDP2M; i++) {
@@ -578,6 +563,8 @@ void hap_final_teardown(struct domain *d)
     paging_lock(d);
     hap_set_allocation(d, 0, NULL);
     ASSERT(d->arch.paging.hap.p2m_pages == 0);
+    ASSERT(d->arch.paging.hap.free_pages == 0);
+    ASSERT(d->arch.paging.hap.total_pages == 0);
     paging_unlock(d);
 }
 
@@ -603,6 +590,7 @@ void hap_vcpu_teardown(struct vcpu *v)
 void hap_teardown(struct domain *d, bool *preempted)
 {
     struct vcpu *v;
+    unsigned int i;
 
     ASSERT(d->is_dying);
     ASSERT(d != current->domain);
@@ -611,6 +599,28 @@ void hap_teardown(struct domain *d, bool *preempted)
     for_each_vcpu ( d, v )
         hap_vcpu_teardown(v);
 
+    /* Leave the root pt in case we get further attempts to modify the p2m. */
+    if ( hvm_altp2m_supported() )
+    {
+        if ( altp2m_active(d) )
+            for_each_vcpu ( d, v )
+                altp2m_vcpu_disable_ve(v);
+
+        d->arch.altp2m_active = 0;
+
+        FREE_XENHEAP_PAGE(d->arch.altp2m_eptp);
+        FREE_XENHEAP_PAGE(d->arch.altp2m_visible_eptp);
+
+        for ( i = 0; i < MAX_ALTP2M; i++ )
+            p2m_teardown(d->arch.altp2m_p2m[i], false);
+    }
+
+    /* Destroy nestedp2m's after altp2m. */
+    for ( i = 0; i < MAX_NESTEDP2M; i++ )
+        p2m_teardown(d->arch.nested_p2m[i], false);
+
+    p2m_teardown(p2m_get_hostp2m(d), false);
+
     paging_lock(d); /* Keep various asserts happy */
 
     if ( d->arch.paging.hap.total_pages != 0 )
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 9eb33eafc7..ac9a1ae078 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2824,8 +2824,17 @@ void shadow_teardown(struct domain *d, bool *preempted)
     for_each_vcpu ( d, v )
         shadow_vcpu_teardown(v);
 
+    p2m_teardown(p2m_get_hostp2m(d), false);
+
     paging_lock(d);
 
+    /*
+     * Reclaim all shadow memory so that shadow_set_allocation() doesn't find
+     * in-use pages, as _shadow_prealloc() will no longer try to reclaim pages
+     * because the domain is dying.
+     */
+    shadow_blow_tables(d);
+
 #if (SHADOW_OPTIMIZATIONS & (SHOPT_VIRTUAL_TLB|SHOPT_OUT_OF_SYNC))
     /* Free the virtual-TLB array attached to each vcpu */
     for_each_vcpu(d, v)
@@ -2946,6 +2955,9 @@ void shadow_final_teardown(struct domain *d)
                    d->arch.paging.shadow.total_pages,
                    d->arch.paging.shadow.free_pages,
                    d->arch.paging.shadow.p2m_pages);
+    ASSERT(!d->arch.paging.shadow.total_pages);
+    ASSERT(!d->arch.paging.shadow.free_pages);
+    ASSERT(!d->arch.paging.shadow.p2m_pages);
     paging_unlock(d);
 }
 
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 20:56:43 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 20:56:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431225.683973 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9vf-0003qj-Hr; Thu, 27 Oct 2022 20:56:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431225.683973; Thu, 27 Oct 2022 20:56:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9vf-0003qc-F9; Thu, 27 Oct 2022 20:56:43 +0000
Received: by outflank-mailman (input) for mailman id 431225;
 Thu, 27 Oct 2022 20:56:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9ve-0003qP-Sl
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:56:42 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9ve-0003M5-S6
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:56:42 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9ve-00024B-RS
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:56:42 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=FnzhdADDRlHtwghmdK4z2U3bhuN8E/+XqlaUHKZ+hXA=; b=Sv3H4tcyHubWzzrOWz9I47r5GE
	amdEc9SDJn3iltIqjbyOObRywCIn5lW6/rhpPq5n3aBliqm28L+EmrPMXVGfqG2QePW5UeYxTuXMH
	8HFRZrk4qAhAbtTCoKavpQ7VE1421FQvGmiAMNeNdH8FXBKUtn6SK2s+kPmI6d8cKiug=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.16] xen/x86: p2m: Add preemption in p2m_teardown()
Message-Id: <E1oo9ve-00024B-RS@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 20:56:42 +0000

commit a603386b422f5cb4c5e2639a7e20a1d99dba2175
Author:     Julien Grall <jgrall@amazon.com>
AuthorDate: Tue Oct 11 14:54:44 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:54:44 2022 +0200

    xen/x86: p2m: Add preemption in p2m_teardown()
    
    The list p2m->pages contains all the pages used by the P2M. On a
    large instance this list can be quite long, and the time spent
    calling d->arch.paging.free_page() will be more than 1ms for an
    80GB guest on Xen running in a nested environment on a c5.metal.
    
    By extrapolation, it would take > 100ms for an 8TB guest (the limit
    we currently security support). So add some preemption in
    p2m_teardown() and propagate it to the callers. Note there are a few
    places where preemption is not enabled:
        - hap_final_teardown()/shadow_final_teardown(): Updates to the
          P2M are prevented once the domain is dying (so no more pages
          can be allocated), and most of the P2M pages will have been
          freed in a preemptible manner when relinquishing the
          resources. So it is fine to disable preemption here.
        - shadow_enable(): This is fine because it only needs to undo
          the allocation that may have been made by p2m_alloc_table()
          (i.e. just the root page table).
    
    The preemption is arbitrarily checked every 1024 iterations.
    
    We now need to include <xen/event.h> in p2m-basic in order to
    import the definition for local_events_need_delivery() used by
    general_preempt_check(). Ideally, the inclusion should happen in
    xen/sched.h but it opened a can of worms.
    
    Note that with the current approach, Xen doesn't keep track of
    whether the alt/nested P2Ms have been cleared, so there is some
    redundant work. However, this is not expected to incur much overhead
    (the P2M lock shouldn't be contended during teardown), so this
    optimization is left outside of the security event.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    master commit: 8a2111250b424edc49c65c4d41b276766d30635c
    master date: 2022-10-11 14:24:48 +0200
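As a standalone sketch of the preemption pattern this patch adds, the list type, the preempt hook, and the helper names below are hypothetical stand-ins for Xen's page_list and general_preempt_check():

```c
#include <stdbool.h>
#include <stddef.h>

/* Minimal singly-linked list standing in for Xen's page_list. */
typedef struct page {
    struct page *next;
} page_t;

/* Link an array of nodes into a list and return its head. */
page_t *link_pages(page_t *arr, size_t n)
{
    for ( size_t i = 0; i + 1 < n; i++ )
        arr[i].next = &arr[i + 1];
    if ( n )
        arr[n - 1].next = NULL;
    return n ? &arr[0] : NULL;
}

/*
 * Pop and "free" every page on the list, polling the (stubbed)
 * preemption check every 1024 iterations as p2m_teardown() now does.
 * Returns the number of pages processed before finishing or being
 * preempted. Passing preempted == NULL disables preemption, matching
 * the final-teardown callers.
 */
size_t teardown_pages(page_t **head, bool (*preempt_check)(void),
                      bool *preempted)
{
    size_t i = 0;

    while ( *head )
    {
        page_t *pg = *head;

        *head = pg->next;
        /* d->arch.paging.free_page(d, pg) would happen here. */
        i++;

        /* Arbitrarily check preemption every 1024 iterations. */
        if ( preempted && !(i % 1024) && preempt_check() )
        {
            *preempted = true;
            break;
        }
    }

    return i;
}

static bool always_preempt(void) { return true; }
```

With a check that always reports pending work, the loop stops after exactly 1024 pages and leaves the rest on the list for the next continuation.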
---
 xen/arch/x86/mm/hap/hap.c       | 22 ++++++++++++++++------
 xen/arch/x86/mm/p2m.c           | 18 +++++++++++++++---
 xen/arch/x86/mm/shadow/common.c | 12 +++++++++---
 xen/include/asm-x86/p2m.h       |  2 +-
 4 files changed, 41 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index a44fcfd95e..1f9a157a0c 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -548,17 +548,17 @@ void hap_final_teardown(struct domain *d)
 
     if ( hvm_altp2m_supported() )
         for ( i = 0; i < MAX_ALTP2M; i++ )
-            p2m_teardown(d->arch.altp2m_p2m[i], true);
+            p2m_teardown(d->arch.altp2m_p2m[i], true, NULL);
 
     /* Destroy nestedp2m's first */
     for (i = 0; i < MAX_NESTEDP2M; i++) {
-        p2m_teardown(d->arch.nested_p2m[i], true);
+        p2m_teardown(d->arch.nested_p2m[i], true, NULL);
     }
 
     if ( d->arch.paging.hap.total_pages != 0 )
         hap_teardown(d, NULL);
 
-    p2m_teardown(p2m_get_hostp2m(d), true);
+    p2m_teardown(p2m_get_hostp2m(d), true, NULL);
     /* Free any memory that the p2m teardown released */
     paging_lock(d);
     hap_set_allocation(d, 0, NULL);
@@ -612,14 +612,24 @@ void hap_teardown(struct domain *d, bool *preempted)
         FREE_XENHEAP_PAGE(d->arch.altp2m_visible_eptp);
 
         for ( i = 0; i < MAX_ALTP2M; i++ )
-            p2m_teardown(d->arch.altp2m_p2m[i], false);
+        {
+            p2m_teardown(d->arch.altp2m_p2m[i], false, preempted);
+            if ( preempted && *preempted )
+                return;
+        }
     }
 
     /* Destroy nestedp2m's after altp2m. */
     for ( i = 0; i < MAX_NESTEDP2M; i++ )
-        p2m_teardown(d->arch.nested_p2m[i], false);
+    {
+        p2m_teardown(d->arch.nested_p2m[i], false, preempted);
+        if ( preempted && *preempted )
+            return;
+    }
 
-    p2m_teardown(p2m_get_hostp2m(d), false);
+    p2m_teardown(p2m_get_hostp2m(d), false, preempted);
+    if ( preempted && *preempted )
+        return;
 
     paging_lock(d); /* Keep various asserts happy */
 
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index aba4f17cbe..8781df9dda 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -749,12 +749,13 @@ int p2m_alloc_table(struct p2m_domain *p2m)
  * hvm fixme: when adding support for pvh non-hardware domains, this path must
  * cleanup any foreign p2m types (release refcnts on them).
  */
-void p2m_teardown(struct p2m_domain *p2m, bool remove_root)
+void p2m_teardown(struct p2m_domain *p2m, bool remove_root, bool *preempted)
 /* Return all the p2m pages to Xen.
  * We know we don't have any extra mappings to these pages */
 {
     struct page_info *pg, *root_pg = NULL;
     struct domain *d;
+    unsigned int i = 0;
 
     if (p2m == NULL)
         return;
@@ -773,8 +774,19 @@ void p2m_teardown(struct p2m_domain *p2m, bool remove_root)
     }
 
     while ( (pg = page_list_remove_head(&p2m->pages)) )
-        if ( pg != root_pg )
-            d->arch.paging.free_page(d, pg);
+    {
+        if ( pg == root_pg )
+            continue;
+
+        d->arch.paging.free_page(d, pg);
+
+        /* Arbitrarily check preemption every 1024 iterations */
+        if ( preempted && !(++i % 1024) && general_preempt_check() )
+        {
+            *preempted = true;
+            break;
+        }
+    }
 
     if ( root_pg )
         page_list_add(root_pg, &p2m->pages);
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index ac9a1ae078..3b0d781991 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2770,8 +2770,12 @@ int shadow_enable(struct domain *d, u32 mode)
  out_locked:
     paging_unlock(d);
  out_unlocked:
+    /*
+     * This is fine to ignore the preemption here because only the root
+     * will be allocated by p2m_alloc_table().
+     */
     if ( rv != 0 && !pagetable_is_null(p2m_get_pagetable(p2m)) )
-        p2m_teardown(p2m, true);
+        p2m_teardown(p2m, true, NULL);
     if ( rv != 0 && pg != NULL )
     {
         pg->count_info &= ~PGC_count_mask;
@@ -2824,7 +2828,9 @@ void shadow_teardown(struct domain *d, bool *preempted)
     for_each_vcpu ( d, v )
         shadow_vcpu_teardown(v);
 
-    p2m_teardown(p2m_get_hostp2m(d), false);
+    p2m_teardown(p2m_get_hostp2m(d), false, preempted);
+    if ( preempted && *preempted )
+        return;
 
     paging_lock(d);
 
@@ -2945,7 +2951,7 @@ void shadow_final_teardown(struct domain *d)
         shadow_teardown(d, NULL);
 
     /* It is now safe to pull down the p2m map. */
-    p2m_teardown(p2m_get_hostp2m(d), true);
+    p2m_teardown(p2m_get_hostp2m(d), true, NULL);
     /* Free any shadow memory that the p2m teardown released */
     paging_lock(d);
     shadow_set_allocation(d, 0, NULL);
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index c3c16748e7..2db9ab0122 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -574,7 +574,7 @@ int p2m_init(struct domain *d);
 int p2m_alloc_table(struct p2m_domain *p2m);
 
 /* Return all the p2m resources to Xen. */
-void p2m_teardown(struct p2m_domain *p2m, bool remove_root);
+void p2m_teardown(struct p2m_domain *p2m, bool remove_root, bool *preempted);
 void p2m_final_teardown(struct domain *d);
 
 /* Add a page to a domain's p2m table */
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 20:56:53 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 20:56:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431226.683977 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9vp-0003tv-JX; Thu, 27 Oct 2022 20:56:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431226.683977; Thu, 27 Oct 2022 20:56:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9vp-0003to-Gi; Thu, 27 Oct 2022 20:56:53 +0000
Received: by outflank-mailman (input) for mailman id 431226;
 Thu, 27 Oct 2022 20:56:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9vo-0003tc-W4
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:56:52 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9vo-0003MF-VP
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:56:52 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9vo-00024b-Uj
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:56:52 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=bCmYsx1FhwJ6HKby9XbK0uDyUXlSXiQc8poCyPNHMYo=; b=Yc6CWM4Lnrb8oeUWxM9XuJ4fVb
	VDcn04Ldppc0KQjaQZS69qoCUvHHVJE7ME44Exxcpv8Xct7Shqrqwjn7FclxDisT19Fd2zW10htTU
	kNIN43oL+lEZohuO999Knhrs65SJNeDNGMrU6lr1cnhrSJwkqMOoXYm8ZMIToAYMtMuA=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.16] libxl, docs: Use arch-specific default paging memory
Message-Id: <E1oo9vo-00024b-Uj@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 20:56:52 +0000

commit 755a9b52844de3e1e47aa1fc9991a4240ccfbf35
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Tue Oct 11 14:55:08 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:55:08 2022 +0200

    libxl, docs: Use arch-specific default paging memory
    
    The default paging memory (described by the `shadow_memory` entry in
    the xl config) in libxl is used to determine the memory pool size
    for xl guests. Currently this size is only used for x86, and
    contains a part of RAM to shadow the resident processes. Since there
    are no shadow mode guests on Arm, the part of RAM to shadow the
    resident processes is not necessary. Therefore, this commit splits
    the function `libxl_get_required_shadow_memory()` into arch-specific
    helpers and renames the helper to
    `libxl__arch_get_required_paging_memory()`.
    
    On x86, this helper returns the same value as the original
    `libxl_get_required_shadow_memory()`, so no functional change is
    intended.
    
    On Arm, this helper returns 1MB per vcpu plus 4KB per MiB of RAM
    for the P2M map and additional 512KB.
    
    Also update the xl.cfg documentation to add Arm documentation
    matching the code changes, and correct the comment style to follow
    the Xen coding style.
    
    This is part of CVE-2022-33747 / XSA-409.
    
    Suggested-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    master commit: 156a239ea288972425f967ac807b3cb5b5e14874
    master date: 2022-10-11 14:28:37 +0200
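The two defaults can be expressed as small helpers returning KiB; the function names here are hypothetical (the real helpers both live under the name `libxl__arch_get_required_paging_memory()` in libxl_x86.c and libxl_arm.c):

```c
/*
 * x86 default: 1MB (256 pages) per vcpu, plus 1 page per MiB of RAM for
 * the P2M map, plus 1 page per MiB of RAM to shadow resident processes.
 * Pages are 4KiB, hence the factor of 4 to get KiB.
 */
unsigned long x86_required_paging_kb(unsigned long maxmem_kb,
                                     unsigned int smp_cpus)
{
    return 4 * (256 * smp_cpus + 2 * (maxmem_kb / 1024));
}

/*
 * Arm default: 1MB per vcpu, plus 1 page per MiB of RAM for the P2M
 * map, plus 512KB (128 pages) for extended regions.
 */
unsigned long arm_required_paging_kb(unsigned long maxmem_kb,
                                     unsigned int smp_cpus)
{
    return 4 * (256 * smp_cpus + maxmem_kb / 1024 + 128);
}
```

For a 4-vCPU guest with 2GiB of RAM, the Arm default works out to 4 * (1024 + 2048 + 128) = 12800 KiB (12.5 MiB), versus 20480 KiB on x86 because of the extra shadow term.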
---
 docs/man/xl.cfg.5.pod.in       |  5 +++++
 tools/libs/light/libxl_arch.h  |  4 ++++
 tools/libs/light/libxl_arm.c   | 14 ++++++++++++++
 tools/libs/light/libxl_utils.c |  9 ++-------
 tools/libs/light/libxl_x86.c   | 13 +++++++++++++
 5 files changed, 38 insertions(+), 7 deletions(-)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index b98d161398..eda1e77ebd 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -1768,6 +1768,11 @@ are not using hardware assisted paging (i.e. you are using shadow
 mode) and your guest workload consists of a very large number of
 similar processes then increasing this value may improve performance.
 
+On Arm, this field is used to determine the size of the guest P2M pages
+pool, and the default value is 1MB per vCPU plus 4KB per MB of RAM for
+the P2M map and additional 512KB for extended regions. Users should
+adjust this value if bigger P2M pool size is needed.
+
 =back
 
 =head3 Processor and Platform Features
diff --git a/tools/libs/light/libxl_arch.h b/tools/libs/light/libxl_arch.h
index 1522ecb97f..5a060c2c30 100644
--- a/tools/libs/light/libxl_arch.h
+++ b/tools/libs/light/libxl_arch.h
@@ -90,6 +90,10 @@ void libxl__arch_update_domain_config(libxl__gc *gc,
                                       libxl_domain_config *dst,
                                       const libxl_domain_config *src);
 
+_hidden
+unsigned long libxl__arch_get_required_paging_memory(unsigned long maxmem_kb,
+                                                     unsigned int smp_cpus);
+
 #if defined(__i386__) || defined(__x86_64__)
 
 #define LAPIC_BASE_ADDRESS  0xfee00000
diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index eef1de0939..73a95e83af 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -154,6 +154,20 @@ out:
     return rc;
 }
 
+unsigned long libxl__arch_get_required_paging_memory(unsigned long maxmem_kb,
+                                                     unsigned int smp_cpus)
+{
+    /*
+     * 256 pages (1MB) per vcpu,
+     * plus 1 page per MiB of RAM for the P2M map,
+     * plus 1 page per MiB of extended region. This default value is 128 MiB
+     * which should be enough for domains that are not running backend.
+     * This is higher than the minimum that Xen would allocate if no value
+     * were given (but the Xen minimum is for safety, not performance).
+     */
+    return 4 * (256 * smp_cpus + maxmem_kb / 1024 + 128);
+}
+
 static struct arch_info {
     const char *guest_type;
     const char *timer_compat;
diff --git a/tools/libs/light/libxl_utils.c b/tools/libs/light/libxl_utils.c
index 4699c4a0a3..e276c0ee9c 100644
--- a/tools/libs/light/libxl_utils.c
+++ b/tools/libs/light/libxl_utils.c
@@ -18,6 +18,7 @@
 #include <ctype.h>
 
 #include "libxl_internal.h"
+#include "libxl_arch.h"
 #include "_paths.h"
 
 #ifndef LIBXL_HAVE_NONCONST_LIBXL_BASENAME_RETURN_VALUE
@@ -39,13 +40,7 @@ char *libxl_basename(const char *name)
 
 unsigned long libxl_get_required_shadow_memory(unsigned long maxmem_kb, unsigned int smp_cpus)
 {
-    /* 256 pages (1MB) per vcpu,
-       plus 1 page per MiB of RAM for the P2M map,
-       plus 1 page per MiB of RAM to shadow the resident processes.
-       This is higher than the minimum that Xen would allocate if no value
-       were given (but the Xen minimum is for safety, not performance).
-     */
-    return 4 * (256 * smp_cpus + 2 * (maxmem_kb / 1024));
+    return libxl__arch_get_required_paging_memory(maxmem_kb, smp_cpus);
 }
 
 char *libxl_domid_to_name(libxl_ctx *ctx, uint32_t domid)
diff --git a/tools/libs/light/libxl_x86.c b/tools/libs/light/libxl_x86.c
index 1feadebb18..51362893cf 100644
--- a/tools/libs/light/libxl_x86.c
+++ b/tools/libs/light/libxl_x86.c
@@ -882,6 +882,19 @@ void libxl__arch_update_domain_config(libxl__gc *gc,
                     libxl_defbool_val(src->b_info.arch_x86.msr_relaxed));
 }
 
+unsigned long libxl__arch_get_required_paging_memory(unsigned long maxmem_kb,
+                                                     unsigned int smp_cpus)
+{
+    /*
+     * 256 pages (1MB) per vcpu,
+     * plus 1 page per MiB of RAM for the P2M map,
+     * plus 1 page per MiB of RAM to shadow the resident processes.
+     * This is higher than the minimum that Xen would allocate if no value
+     * were given (but the Xen minimum is for safety, not performance).
+     */
+    return 4 * (256 * smp_cpus + 2 * (maxmem_kb / 1024));
+}
+
 /*
  * Local variables:
  * mode: C
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 20:57:03 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 20:57:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431227.683981 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9vz-0003zG-Mt; Thu, 27 Oct 2022 20:57:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431227.683981; Thu, 27 Oct 2022 20:57:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9vz-0003z6-Ju; Thu, 27 Oct 2022 20:57:03 +0000
Received: by outflank-mailman (input) for mailman id 431227;
 Thu, 27 Oct 2022 20:57:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9vz-0003yv-39
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:57:03 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9vz-0003MW-2M
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:57:03 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9vz-00025H-1h
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:57:03 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=4cw7YxVEzxbdD8KT8esGisNYsgklcOWwUOJotO43t4M=; b=JOvwmDD18TpWFZrO2zxYoTUxFh
	JFgvi0gdSc2trDZhFzxQp8/HwKDjQ37wAjPUxUjcKNyWESnNbhpY5i78dWymfHfig22wglEMNzrWv
	1/TOtRawH50MbnCavv6mZnxWWrf529ZgtrWqKIiuLyIIQCVyhNqMiABSdAC1gdWrQWa0=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.16] xen/arm: Construct the P2M pages pool for guests
Message-Id: <E1oo9vz-00025H-1h@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 20:57:03 +0000

commit 914fc8e8b4cc003e90d51bee0aef54687358530a
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Tue Oct 11 14:55:21 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:55:21 2022 +0200

    xen/arm: Construct the P2M pages pool for guests
    
    This commit constructs the p2m pages pool for guests from the
    data structure and helper perspective.
    
    This is implemented by:
    
    - Adding a `struct paging_domain`, containing a freelist, a counter
    and a spinlock, to `struct arch_domain` to track the free p2m pages
    and the total number of p2m pages in the p2m pages pool.
    
    - Adding a helper `p2m_get_allocation` to get the p2m pool size.
    
    - Adding a helper `p2m_set_allocation` to set the p2m pages pool
    size. This helper should be called before allocating memory for
    a guest.
    
    - Adding a helper `p2m_teardown_allocation` to free the p2m pages
    pool. This helper should be called during xl domain destroy.
    
    This is part of CVE-2022-33747 / XSA-409.
    
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    master commit: 55914f7fc91a468649b8a3ec3f53ae1c4aca6670
    master date: 2022-10-11 14:28:39 +0200
---
 xen/arch/arm/p2m.c           | 88 ++++++++++++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/domain.h | 10 +++++
 xen/include/asm-arm/p2m.h    |  4 ++
 3 files changed, 102 insertions(+)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 27418ee5ee..d8957dd872 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -50,6 +50,92 @@ static uint64_t generate_vttbr(uint16_t vmid, mfn_t root_mfn)
     return (mfn_to_maddr(root_mfn) | ((uint64_t)vmid << 48));
 }
 
+/* Return the size of the pool, rounded up to the nearest MB */
+unsigned int p2m_get_allocation(struct domain *d)
+{
+    unsigned long nr_pages = ACCESS_ONCE(d->arch.paging.p2m_total_pages);
+
+    return ROUNDUP(nr_pages, 1 << (20 - PAGE_SHIFT)) >> (20 - PAGE_SHIFT);
+}
+
+/*
+ * Set the pool of pages to the required number of pages.
+ * Returns 0 for success, non-zero for failure.
+ * Call with d->arch.paging.lock held.
+ */
+int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted)
+{
+    struct page_info *pg;
+
+    ASSERT(spin_is_locked(&d->arch.paging.lock));
+
+    for ( ; ; )
+    {
+        if ( d->arch.paging.p2m_total_pages < pages )
+        {
+            /* Need to allocate more memory from domheap */
+            pg = alloc_domheap_page(NULL, 0);
+            if ( pg == NULL )
+            {
+                printk(XENLOG_ERR "Failed to allocate P2M pages.\n");
+                return -ENOMEM;
+            }
+            ACCESS_ONCE(d->arch.paging.p2m_total_pages) =
+                d->arch.paging.p2m_total_pages + 1;
+            page_list_add_tail(pg, &d->arch.paging.p2m_freelist);
+        }
+        else if ( d->arch.paging.p2m_total_pages > pages )
+        {
+            /* Need to return memory to domheap */
+            pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
+            if( pg )
+            {
+                ACCESS_ONCE(d->arch.paging.p2m_total_pages) =
+                    d->arch.paging.p2m_total_pages - 1;
+                free_domheap_page(pg);
+            }
+            else
+            {
+                printk(XENLOG_ERR
+                       "Failed to free P2M pages, P2M freelist is empty.\n");
+                return -ENOMEM;
+            }
+        }
+        else
+            break;
+
+        /* Check to see if we need to yield and try again */
+        if ( preempted && general_preempt_check() )
+        {
+            *preempted = true;
+            return -ERESTART;
+        }
+    }
+
+    return 0;
+}
+
+int p2m_teardown_allocation(struct domain *d)
+{
+    int ret = 0;
+    bool preempted = false;
+
+    spin_lock(&d->arch.paging.lock);
+    if ( d->arch.paging.p2m_total_pages != 0 )
+    {
+        ret = p2m_set_allocation(d, 0, &preempted);
+        if ( preempted )
+        {
+            spin_unlock(&d->arch.paging.lock);
+            return -ERESTART;
+        }
+        ASSERT(d->arch.paging.p2m_total_pages == 0);
+    }
+    spin_unlock(&d->arch.paging.lock);
+
+    return ret;
+}
+
 /* Unlock the flush and do a P2M TLB flush if necessary */
 void p2m_write_unlock(struct p2m_domain *p2m)
 {
@@ -1599,7 +1685,9 @@ int p2m_init(struct domain *d)
     unsigned int cpu;
 
     rwlock_init(&p2m->lock);
+    spin_lock_init(&d->arch.paging.lock);
     INIT_PAGE_LIST_HEAD(&p2m->pages);
+    INIT_PAGE_LIST_HEAD(&d->arch.paging.p2m_freelist);
 
     p2m->vmid = INVALID_VMID;
 
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 7f8ddd3f5c..2f31795ab9 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -40,6 +40,14 @@ struct vtimer {
     uint64_t cval;
 };
 
+struct paging_domain {
+    spinlock_t lock;
+    /* Free P2M pages from the pre-allocated P2M pool */
+    struct page_list_head p2m_freelist;
+    /* Number of pages from the pre-allocated P2M pool */
+    unsigned long p2m_total_pages;
+};
+
 struct arch_domain
 {
 #ifdef CONFIG_ARM_64
@@ -51,6 +59,8 @@ struct arch_domain
 
     struct hvm_domain hvm;
 
+    struct paging_domain paging;
+
     struct vmmio vmmio;
 
     /* Continuable domain_relinquish_resources(). */
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index b3ba83283e..c9598740bd 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -218,6 +218,10 @@ void p2m_restore_state(struct vcpu *n);
 /* Print debugging/statistial info about a domain's p2m */
 void p2m_dump_info(struct domain *d);
 
+unsigned int p2m_get_allocation(struct domain *d);
+int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted);
+int p2m_teardown_allocation(struct domain *d);
+
 static inline void p2m_write_lock(struct p2m_domain *p2m)
 {
     write_lock(&p2m->lock);
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 20:57:14 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 20:57:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431228.683986 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9wA-00042W-Oj; Thu, 27 Oct 2022 20:57:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431228.683986; Thu, 27 Oct 2022 20:57:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9wA-00042O-Lf; Thu, 27 Oct 2022 20:57:14 +0000
Received: by outflank-mailman (input) for mailman id 431228;
 Thu, 27 Oct 2022 20:57:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9w9-00042F-7Z
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:57:13 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9w9-0003Ma-6u
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:57:13 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9w9-00025k-4n
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:57:13 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=WYT9Odw3b2ghJVlZ89zO4ctw4+NqzVRmsZf6SSYzSws=; b=fyxcVq844JwwqE2AdcTK+Y0ecO
	dOCrBdqS3oI+szgQ//IS3ressxAeQm9UEgCE/+t9LFBhprBBEVwpOIjV1ZSLMEXpmMkqM8L/IWrtD
	XBJnxOslcsJfaOLS82/cKHreaPbyfehZZa/9ZokzqfrSptmzFnJLGPyH5dgVH1vdzfDc=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.16] xen/arm, libxl: Implement XEN_DOMCTL_shadow_op for Arm
Message-Id: <E1oo9w9-00025k-4n@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 20:57:13 +0000

commit 3a16da801e14b8ff996b6f7408391ce488abd925
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Tue Oct 11 14:55:40 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:55:40 2022 +0200

    xen/arm, libxl: Implement XEN_DOMCTL_shadow_op for Arm
    
    This commit implements the `XEN_DOMCTL_shadow_op` support in Xen
    for Arm. The p2m pages pool size for xl guests is supposed to be
    determined by `XEN_DOMCTL_shadow_op`. Hence, this commit:
    
    - Introduces a function `p2m_domctl` and implements the subops
    `XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION` and
    `XEN_DOMCTL_SHADOW_OP_GET_ALLOCATION` of `XEN_DOMCTL_shadow_op`.
    
    - Adds the `XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION` support in libxl.
    
    This enables setting the shadow memory pool size when creating a
    guest from xl, and querying the shadow memory pool size from Xen.
    
    Note that the `XEN_DOMCTL_shadow_op` added in this commit is only
    a dummy op, and the functionality of setting/getting the p2m memory
    pool size for xl guests will be added in subsequent commits.
    
    This is part of CVE-2022-33747 / XSA-409.
    
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    master commit: cf2a68d2ffbc3ce95e01449d46180bddb10d24a0
    master date: 2022-10-11 14:28:42 +0200
---
 tools/libs/light/libxl_arm.c | 12 ++++++++++++
 xen/arch/arm/domctl.c        | 32 ++++++++++++++++++++++++++++++++
 2 files changed, 44 insertions(+)

diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index 73a95e83af..22a0c561bb 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -131,6 +131,18 @@ int libxl__arch_domain_create(libxl__gc *gc,
                               libxl__domain_build_state *state,
                               uint32_t domid)
 {
+    libxl_ctx *ctx = libxl__gc_owner(gc);
+    unsigned int shadow_mb = DIV_ROUNDUP(d_config->b_info.shadow_memkb, 1024);
+
+    int r = xc_shadow_control(ctx->xch, domid,
+                              XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION,
+                              &shadow_mb, 0);
+    if (r) {
+        LOGED(ERROR, domid,
+              "Failed to set %u MiB shadow allocation", shadow_mb);
+        return ERROR_FAIL;
+    }
+
     return 0;
 }
 
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 1baf25c3d9..9bf72e6930 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -47,11 +47,43 @@ static int handle_vuart_init(struct domain *d,
     return rc;
 }
 
+static long p2m_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
+                       XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
+{
+    if ( unlikely(d == current->domain) )
+    {
+        printk(XENLOG_ERR "Tried to do a p2m domctl op on itself.\n");
+        return -EINVAL;
+    }
+
+    if ( unlikely(d->is_dying) )
+    {
+        printk(XENLOG_ERR "Tried to do a p2m domctl op on dying domain %u\n",
+               d->domain_id);
+        return -EINVAL;
+    }
+
+    switch ( sc->op )
+    {
+    case XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION:
+        return 0;
+    case XEN_DOMCTL_SHADOW_OP_GET_ALLOCATION:
+        return 0;
+    default:
+    {
+        printk(XENLOG_ERR "Bad p2m domctl op %u\n", sc->op);
+        return -EINVAL;
+    }
+    }
+}
+
 long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
                     XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     switch ( domctl->cmd )
     {
+    case XEN_DOMCTL_shadow_op:
+        return p2m_domctl(d, &domctl->u.shadow_op, u_domctl);
     case XEN_DOMCTL_cacheflush:
     {
         gfn_t s = _gfn(domctl->u.cacheflush.start_pfn);
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 20:57:24 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 20:57:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431229.683989 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9wK-00045P-Q0; Thu, 27 Oct 2022 20:57:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431229.683989; Thu, 27 Oct 2022 20:57:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9wK-00045H-NG; Thu, 27 Oct 2022 20:57:24 +0000
Received: by outflank-mailman (input) for mailman id 431229;
 Thu, 27 Oct 2022 20:57:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9wJ-000453-Aq
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:57:23 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9wJ-0003N4-A9
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:57:23 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9wJ-00026B-9T
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:57:23 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=02bzn0yZKkdDbXrO5OLIpfVukVRPabJbZbN5qM+RAnk=; b=q/xMwsYm3VyvRpFxljTU7HmPy/
	FBlkffJXL+5A/qkcQ24BAYCCLFlmwweXku9VjXV3U6fQqbY18e2WMooAFayimh3Bza7E/+yi3LEFE
	9KFYPX7sq0z2H1JYZfH0nVSATa4iiKdvFh2xBPiZfdxBD/p5umydqwN9E/i1XQCEUv4s=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.16] xen/arm: Allocate and free P2M pages from the P2M pool
Message-Id: <E1oo9wJ-00026B-9T@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 20:57:23 +0000

commit 44e9dcc48b81bca202a5b31926125a6a59a4c72e
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Tue Oct 11 14:55:53 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:55:53 2022 +0200

    xen/arm: Allocate and free P2M pages from the P2M pool
    
    This commit sets up and tears down the p2m pages pool for
    non-privileged Arm guests by calling `p2m_set_allocation` and
    `p2m_teardown_allocation`.
    
    - For dom0, P2M pages should come directly from the heap instead of
    the p2m pool, so that the kernel may take advantage of the extended
    regions.
    
    - For xl guests, the setting of the p2m pool is called in
    `XEN_DOMCTL_shadow_op` and the p2m pool is destroyed in
    `domain_relinquish_resources`. Note that domctl->u.shadow_op.mb is
    updated with the new size when setting the p2m pool.
    
    - For dom0less domUs, the p2m pool is set up before allocating
    memory during domain creation. Users can specify the p2m pool size
    via the `xen,domain-p2m-mem-mb` dts property.
    
    To actually allocate/free pages from the p2m pool, this commit adds
    two helper functions namely `p2m_alloc_page` and `p2m_free_page` to
    `struct p2m_domain`. By replacing the `alloc_domheap_page` and
    `free_domheap_page` with these two helper functions, p2m pages can
    be added/removed from the list of p2m pool rather than from the heap.
    
    Since the page returned by `p2m_alloc_page` is already cleaned, take
    the opportunity to remove the redundant `clean_page` in
    `p2m_create_table`.
    
    This is part of CVE-2022-33747 / XSA-409.
    
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    master commit: cbea5a1149ca7fd4b7cdbfa3ec2e4f109b601ff7
    master date: 2022-10-11 14:28:44 +0200
---
 docs/misc/arm/device-tree/booting.txt |  8 +++++
 xen/arch/arm/domain.c                 |  6 ++++
 xen/arch/arm/domain_build.c           | 29 ++++++++++++++++++
 xen/arch/arm/domctl.c                 | 23 +++++++++++++-
 xen/arch/arm/p2m.c                    | 57 ++++++++++++++++++++++++++++++++---
 5 files changed, 118 insertions(+), 5 deletions(-)

diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
index 71895663a4..d92ccc56ff 100644
--- a/docs/misc/arm/device-tree/booting.txt
+++ b/docs/misc/arm/device-tree/booting.txt
@@ -182,6 +182,14 @@ with the following properties:
     Both #address-cells and #size-cells need to be specified because
     both sub-nodes (described shortly) have reg properties.
 
+- xen,domain-p2m-mem-mb
+
+    Optional. A 32-bit integer specifying the amount of megabytes of RAM
+    used for the domain P2M pool. This is in-sync with the shadow_memory
+    option in xl.cfg. Leaving this field empty in device tree will lead to
+    the default size of domain P2M pool, i.e. 1MB per guest vCPU plus 4KB
+    per MB of guest RAM plus 512KB for guest extended regions.
+
 Under the "xen,domain" compatible node, one or more sub-nodes are present
 for the DomU kernel and ramdisk.
 
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 2694c39127..a818f33a1a 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -997,6 +997,7 @@ enum {
     PROG_page,
     PROG_mapping,
     PROG_p2m,
+    PROG_p2m_pool,
     PROG_done,
 };
 
@@ -1062,6 +1063,11 @@ int domain_relinquish_resources(struct domain *d)
         if ( ret )
             return ret;
 
+    PROGRESS(p2m_pool):
+        ret = p2m_teardown_allocation(d);
+        if( ret )
+            return ret;
+
     PROGRESS(done):
         break;
 
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index d02bacbcd1..8aec3755ca 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2833,6 +2833,21 @@ static void __init find_gnttab_region(struct domain *d,
            kinfo->gnttab_start, kinfo->gnttab_start + kinfo->gnttab_size);
 }
 
+static unsigned long __init domain_p2m_pages(unsigned long maxmem_kb,
+                                             unsigned int smp_cpus)
+{
+    /*
+     * Keep in sync with libxl__get_required_paging_memory().
+     * 256 pages (1MB) per vcpu, plus 1 page per MiB of RAM for the P2M map,
+     * plus 128 pages to cover extended regions.
+     */
+    unsigned long memkb = 4 * (256 * smp_cpus + (maxmem_kb / 1024) + 128);
+
+    BUILD_BUG_ON(PAGE_SIZE != SZ_4K);
+
+    return DIV_ROUND_UP(memkb, 1024) << (20 - PAGE_SHIFT);
+}
+
 static int __init construct_domain(struct domain *d, struct kernel_info *kinfo)
 {
     unsigned int i;
@@ -2924,6 +2939,8 @@ static int __init construct_domU(struct domain *d,
     struct kernel_info kinfo = {};
     int rc;
     u64 mem;
+    u32 p2m_mem_mb;
+    unsigned long p2m_pages;
 
     rc = dt_property_read_u64(node, "memory", &mem);
     if ( !rc )
@@ -2933,6 +2950,18 @@ static int __init construct_domU(struct domain *d,
     }
     kinfo.unassigned_mem = (paddr_t)mem * SZ_1K;
 
+    rc = dt_property_read_u32(node, "xen,domain-p2m-mem-mb", &p2m_mem_mb);
+    /* If xen,domain-p2m-mem-mb is not specified, use the default value. */
+    p2m_pages = rc ?
+                p2m_mem_mb << (20 - PAGE_SHIFT) :
+                domain_p2m_pages(mem, d->max_vcpus);
+
+    spin_lock(&d->arch.paging.lock);
+    rc = p2m_set_allocation(d, p2m_pages, NULL);
+    spin_unlock(&d->arch.paging.lock);
+    if ( rc != 0 )
+        return rc;
+
     printk("*** LOADING DOMU cpus=%u memory=%"PRIx64"KB ***\n", d->max_vcpus, mem);
 
     kinfo.vpl011 = dt_property_read_bool(node, "vpl011");
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 9bf72e6930..c8fdeb1240 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -50,6 +50,9 @@ static int handle_vuart_init(struct domain *d,
 static long p2m_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
                        XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
+    long rc;
+    bool preempted = false;
+
     if ( unlikely(d == current->domain) )
     {
         printk(XENLOG_ERR "Tried to do a p2m domctl op on itself.\n");
@@ -66,9 +69,27 @@ static long p2m_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
     switch ( sc->op )
     {
     case XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION:
-        return 0;
+    {
+        /* Allow and handle preemption */
+        spin_lock(&d->arch.paging.lock);
+        rc = p2m_set_allocation(d, sc->mb << (20 - PAGE_SHIFT), &preempted);
+        spin_unlock(&d->arch.paging.lock);
+
+        if ( preempted )
+            /* Not finished. Set up to re-run the call. */
+            rc = hypercall_create_continuation(__HYPERVISOR_domctl, "h",
+                                               u_domctl);
+        else
+            /* Finished. Return the new allocation. */
+            sc->mb = p2m_get_allocation(d);
+
+        return rc;
+    }
     case XEN_DOMCTL_SHADOW_OP_GET_ALLOCATION:
+    {
+        sc->mb = p2m_get_allocation(d);
         return 0;
+    }
     default:
     {
         printk(XENLOG_ERR "Bad p2m domctl op %u\n", sc->op);
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index d8957dd872..b2d856a801 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -50,6 +50,54 @@ static uint64_t generate_vttbr(uint16_t vmid, mfn_t root_mfn)
     return (mfn_to_maddr(root_mfn) | ((uint64_t)vmid << 48));
 }
 
+static struct page_info *p2m_alloc_page(struct domain *d)
+{
+    struct page_info *pg;
+
+    spin_lock(&d->arch.paging.lock);
+    /*
+     * For hardware domain, there should be no limit in the number of pages that
+     * can be allocated, so that the kernel may take advantage of the extended
+     * regions. Hence, allocate p2m pages for hardware domains from heap.
+     */
+    if ( is_hardware_domain(d) )
+    {
+        pg = alloc_domheap_page(NULL, 0);
+        if ( pg == NULL )
+        {
+            printk(XENLOG_G_ERR "Failed to allocate P2M pages for hwdom.\n");
+            spin_unlock(&d->arch.paging.lock);
+            return NULL;
+        }
+    }
+    else
+    {
+        pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
+        if ( unlikely(!pg) )
+        {
+            spin_unlock(&d->arch.paging.lock);
+            return NULL;
+        }
+        d->arch.paging.p2m_total_pages--;
+    }
+    spin_unlock(&d->arch.paging.lock);
+
+    return pg;
+}
+
+static void p2m_free_page(struct domain *d, struct page_info *pg)
+{
+    spin_lock(&d->arch.paging.lock);
+    if ( is_hardware_domain(d) )
+        free_domheap_page(pg);
+    else
+    {
+        d->arch.paging.p2m_total_pages++;
+        page_list_add_tail(pg, &d->arch.paging.p2m_freelist);
+    }
+    spin_unlock(&d->arch.paging.lock);
+}
+
 /* Return the size of the pool, rounded up to the nearest MB */
 unsigned int p2m_get_allocation(struct domain *d)
 {
@@ -751,7 +799,7 @@ static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry)
 
     ASSERT(!p2m_is_valid(*entry));
 
-    page = alloc_domheap_page(NULL, 0);
+    page = p2m_alloc_page(p2m->domain);
     if ( page == NULL )
         return -ENOMEM;
 
@@ -878,7 +926,7 @@ static void p2m_free_entry(struct p2m_domain *p2m,
     pg = mfn_to_page(mfn);
 
     page_list_del(pg, &p2m->pages);
-    free_domheap_page(pg);
+    p2m_free_page(p2m->domain, pg);
 }
 
 static bool p2m_split_superpage(struct p2m_domain *p2m, lpae_t *entry,
@@ -902,7 +950,7 @@ static bool p2m_split_superpage(struct p2m_domain *p2m, lpae_t *entry,
     ASSERT(level < target);
     ASSERT(p2m_is_superpage(*entry, level));
 
-    page = alloc_domheap_page(NULL, 0);
+    page = p2m_alloc_page(p2m->domain);
     if ( !page )
         return false;
 
@@ -1641,7 +1689,7 @@ int p2m_teardown(struct domain *d)
 
     while ( (pg = page_list_remove_head(&p2m->pages)) )
     {
-        free_domheap_page(pg);
+        p2m_free_page(p2m->domain, pg);
         count++;
         /* Arbitrarily preempt every 512 iterations */
         if ( !(count % 512) && hypercall_preempt_check() )
@@ -1665,6 +1713,7 @@ void p2m_final_teardown(struct domain *d)
         return;
 
     ASSERT(page_list_empty(&p2m->pages));
+    ASSERT(page_list_empty(&d->arch.paging.p2m_freelist));
 
     if ( p2m->root )
         free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 20:57:34 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 20:57:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431230.683993 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9wU-000489-RY; Thu, 27 Oct 2022 20:57:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431230.683993; Thu, 27 Oct 2022 20:57:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9wU-000481-Ok; Thu, 27 Oct 2022 20:57:34 +0000
Received: by outflank-mailman (input) for mailman id 431230;
 Thu, 27 Oct 2022 20:57:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9wT-00047k-EU
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:57:33 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9wT-0003ND-Dt
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:57:33 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9wT-00026c-CY
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:57:33 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=rTwZZKeZ3CtxHIoTuJxUn7VV8jrrQQSUWD/ifWHKozo=; b=ZEIk1tSYW9mYvf+mhATlJ+w7so
	1VYBe6QDNH0xsfdApGbVx7yB2elNantwK/HZpIBTvRXKbBhSzA/GOHf887wGico+N/HKdk9V3B4Ak
	37FiFItgUO49x4LF/nCXKwIxC99DZYh6n37HR31k0aXbby8S9XIA+QSk6dE2GWOiBWnk=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.16] gnttab: correct locking on transitive grant copy error path
Message-Id: <E1oo9wT-00026c-CY@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 20:57:33 +0000

commit 32cb81501c8b858fe9a451650804ec3024a8b364
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 14:56:29 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:56:29 2022 +0200

    gnttab: correct locking on transitive grant copy error path
    
    While the comment next to the lock dropping in preparation of
    recursively calling acquire_grant_for_copy() mistakenly talks about the
    rd == td case (excluded a few lines further up), the same concerns apply
    to the calling of release_grant_for_copy() on a subsequent error path.
    
    This is CVE-2022-33748 / XSA-411.
    
    Fixes: ad48fb963dbf ("gnttab: fix transitive grant handling")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    master commit: 6e3aab858eef614a21a782a3b73acc88e74690ea
    master date: 2022-10-11 14:29:30 +0200
---
 xen/common/grant_table.c | 19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 4c742cd8fe..d8ca645b96 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -2613,9 +2613,8 @@ acquire_grant_for_copy(
                      trans_domid);
 
         /*
-         * acquire_grant_for_copy() could take the lock on the
-         * remote table (if rd == td), so we have to drop the lock
-         * here and reacquire.
+         * acquire_grant_for_copy() will take the lock on the remote table,
+         * so we have to drop the lock here and reacquire.
          */
         active_entry_release(act);
         grant_read_unlock(rgt);
@@ -2652,11 +2651,25 @@ acquire_grant_for_copy(
                           act->trans_gref != trans_gref ||
                           !act->is_sub_page)) )
         {
+            /*
+             * Like above for acquire_grant_for_copy() we need to drop and then
+             * re-acquire the locks here to prevent lock order inversion issues.
+             * Unlike for acquire_grant_for_copy() we don't need to re-check
+             * anything, as release_grant_for_copy() doesn't depend on the grant
+             * table entry: It only updates internal state and the status flags.
+             */
+            active_entry_release(act);
+            grant_read_unlock(rgt);
+
             release_grant_for_copy(td, trans_gref, readonly);
             rcu_unlock_domain(td);
+
+            grant_read_lock(rgt);
+            act = active_entry_acquire(rgt, gref);
             reduce_status_for_pin(rd, act, status, readonly);
             active_entry_release(act);
             grant_read_unlock(rgt);
+
             put_page(*page);
             *page = NULL;
             return ERESTART;
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 20:57:44 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 20:57:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431231.683997 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9we-0004Bs-U9; Thu, 27 Oct 2022 20:57:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431231.683997; Thu, 27 Oct 2022 20:57:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9we-0004Bk-RW; Thu, 27 Oct 2022 20:57:44 +0000
Received: by outflank-mailman (input) for mailman id 431231;
 Thu, 27 Oct 2022 20:57:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9wd-0004BP-Hr
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:57:43 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9wd-0003NN-H9
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:57:43 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9wd-000273-GU
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:57:43 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=Pgw/BgykX/8VdeA0h7v3z8GUzFcWVk1QuJeEdEF6fUQ=; b=VDY4Mfeee6PutJDRgkP9QCECpA
	7DnRlQQ1wMTDjCuc6gbdGuN2AfXCK1tFYEZA1Z8aCdXuCxzL0eOxTVZjMFLYEfhSp2LV3YQYcTlGC
	xge3na9t5bKWN4XGHOLwXevfJsBNaOrh07BX/omTd0NMEvrlyYxP2Q2H3eGBBISGXRII=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.16] tools/libxl: Replace deprecated -soundhw on QEMU command line
Message-Id: <E1oo9wd-000273-GU@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 20:57:43 +0000

commit e85e2a3c17b6cd38de041cdaf14d9efdcdabad1a
Author:     Anthony PERARD <anthony.perard@citrix.com>
AuthorDate: Tue Oct 11 14:59:10 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:59:10 2022 +0200

    tools/libxl: Replace deprecated -soundhw on QEMU command line
    
    -soundhw has been deprecated since 825ff02911c9 ("audio: add soundhw
    deprecation notice"), QEMU v5.1, and has been removed for the upcoming
    v7.1 by 039a68373c45 ("introduce -audio as a replacement for -soundhw").
    
    Instead we can simply add the sound card with "-device" for most of
    the options that "-soundhw" could handle. "-device" is an option that
    existed before QEMU 1.0 and could already be used to add audio
    hardware.
    
    The list of possible options for libxl's "soundhw" is taken from QEMU
    7.0.
    
    The options for "soundhw" are listed in order of preference in the
    manual. The first three (hda, ac97, es1370) are PCI devices and easy
    to test on Linux, while the last four are ISA devices which don't seem
    to work out of the box on Linux.
    
    The sound card 'pcspk' isn't listed even though it used to be accepted
    by '-soundhw', because QEMU crashes when trying to add it to a Xen
    domain. It also wouldn't work with "-device"; it might need to be
    "-machine pcspk-audiodev=default" instead.
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
    master commit: 62ca138c2c052187783aca3957d3f47c4dcfd683
    master date: 2022-08-18 09:25:50 +0200
---
 docs/man/xl.cfg.5.pod.in                  |  6 +++---
 tools/libs/light/libxl_dm.c               | 19 ++++++++++++++++++-
 tools/libs/light/libxl_types_internal.idl | 10 ++++++++++
 3 files changed, 31 insertions(+), 4 deletions(-)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index eda1e77ebd..ab7541f22c 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -2545,9 +2545,9 @@ The form serial=DEVICE is also accepted for backwards compatibility.
 
 =item B<soundhw="DEVICE">
 
-Select the virtual sound card to expose to the guest. The valid
-devices are defined by the device model configuration, please see the
-B<qemu(1)> manpage for details. The default is not to export any sound
+Select the virtual sound card to expose to the guest. The valid devices are
+B<hda>, B<ac97>, B<es1370>, B<adlib>, B<cs4231a>, B<gus>, B<sb16> if they are
+available with the device model QEMU. The default is not to export any sound
 device.
 
 =item B<vkb_device=BOOLEAN>
diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index 04bf5d8563..fc264a3a13 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -1204,6 +1204,7 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
     uint64_t ram_size;
     const char *path, *chardev;
     bool is_stubdom = libxl_defbool_val(b_info->device_model_stubdomain);
+    int rc;
 
     dm_args = flexarray_make(gc, 16, 1);
     dm_envs = flexarray_make(gc, 16, 1);
@@ -1531,7 +1532,23 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
             }
         }
         if (b_info->u.hvm.soundhw) {
-            flexarray_vappend(dm_args, "-soundhw", b_info->u.hvm.soundhw, NULL);
+            libxl__qemu_soundhw soundhw;
+
+            rc = libxl__qemu_soundhw_from_string(b_info->u.hvm.soundhw, &soundhw);
+            if (rc) {
+                LOGD(ERROR, guest_domid, "Unknown soundhw option '%s'", b_info->u.hvm.soundhw);
+                return ERROR_INVAL;
+            }
+
+            switch (soundhw) {
+            case LIBXL__QEMU_SOUNDHW_HDA:
+                flexarray_vappend(dm_args, "-device", "intel-hda",
+                                  "-device", "hda-duplex", NULL);
+                break;
+            default:
+                flexarray_append_pair(dm_args, "-device",
+                                      (char*)libxl__qemu_soundhw_to_string(soundhw));
+            }
         }
         if (!libxl__acpi_defbool_val(b_info)) {
             flexarray_append(dm_args, "-no-acpi");
diff --git a/tools/libs/light/libxl_types_internal.idl b/tools/libs/light/libxl_types_internal.idl
index 3593e21dbb..caa08d3229 100644
--- a/tools/libs/light/libxl_types_internal.idl
+++ b/tools/libs/light/libxl_types_internal.idl
@@ -55,3 +55,13 @@ libxl__device_action = Enumeration("device_action", [
     (1, "ADD"),
     (2, "REMOVE"),
     ])
+
+libxl__qemu_soundhw = Enumeration("qemu_soundhw", [
+    (1, "ac97"),
+    (2, "adlib"),
+    (3, "cs4231a"),
+    (4, "es1370"),
+    (5, "gus"),
+    (6, "hda"),
+    (7, "sb16"),
+    ])
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 20:57:55 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 20:57:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431232.684001 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9wo-0004Ed-Vw; Thu, 27 Oct 2022 20:57:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431232.684001; Thu, 27 Oct 2022 20:57:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9wo-0004EW-T5; Thu, 27 Oct 2022 20:57:54 +0000
Received: by outflank-mailman (input) for mailman id 431232;
 Thu, 27 Oct 2022 20:57:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9wn-0004EM-Kh
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:57:53 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9wn-0003NR-K2
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:57:53 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9wn-00027S-JS
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:57:53 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=RLo8VMa0142glcz3Cl4u3Jb2o+VOLeAxOdEIOzrRdiM=; b=tK8oMzhG8KOHWtAEyMjcgbc64o
	N+9VMRN5/NMNGMXAjodL6MbIYSxW08HCPwmgPXNhS6RU8lDb+0Mxccdp/O0M0AP1MdPr2B+c7Heo8
	tCw3UbBDmgHvrBai1PYH70NdXZ0g8QdnVwExgVDqwByoQtOUALxhE6WJgsshhwH4QY1c=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.16] x86/CPUID: surface suitable value in EBX of XSTATE subleaf 1
Message-Id: <E1oo9wn-00027S-JS@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 20:57:53 +0000

commit e8882bcfe35520e950ba60acd6e67e65f1ce90a8
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 14:59:26 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:59:26 2022 +0200

    x86/CPUID: surface suitable value in EBX of XSTATE subleaf 1
    
    While the SDM isn't very clear about this, our present behavior makes
    Linux 5.19 unhappy. As of commit 8ad7e8f69695 ("x86/fpu/xsave: Support
    XSAVEC in the kernel") they're using this CPUID output also to size
    the compacted area used by XSAVEC. Getting back zero there isn't well
    received, yet for PV that's the default on capable hardware: XSAVES
    isn't exposed to PV domains.
    
    Considering that the size reported is that of the compacted save area,
    I view Linux's assumption as appropriate (short of the SDM properly
    considering the case). Therefore we need to populate the field also
    when only XSAVEC is supported for a guest.
    
    Fixes: 460b9a4b3630 ("x86/xsaves: enable xsaves/xrstors for hvm guest")
    Fixes: 8d050ed1097c ("x86: don't expose XSAVES capability to PV guests")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: c3bd0b83ea5b7c0da6542687436042eeea1e7909
    master date: 2022-08-24 14:23:59 +0200
---
 xen/arch/x86/cpuid.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index ff335f1639..a647331f47 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -1060,7 +1060,7 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
         switch ( subleaf )
         {
         case 1:
-            if ( p->xstate.xsaves )
+            if ( p->xstate.xsavec || p->xstate.xsaves )
             {
                 /*
                  * TODO: Figure out what to do for XSS state.  VT-x manages
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 20:58:05 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 20:58:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431233.684004 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9wz-0004HF-1d; Thu, 27 Oct 2022 20:58:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431233.684004; Thu, 27 Oct 2022 20:58:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9wy-0004H7-Ut; Thu, 27 Oct 2022 20:58:04 +0000
Received: by outflank-mailman (input) for mailman id 431233;
 Thu, 27 Oct 2022 20:58:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9wx-0004Gz-Nf
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:58:03 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9wx-0003Ni-N1
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:58:03 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9wx-000283-MT
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:58:03 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=IUC9+Q3Hcv26myVhApY6H+AdKshbP30RAAjMq6dMZmI=; b=P5/mD1+ej2p5YRRxlQ1sWvfli1
	EZ7PlYDn8ETjD+DIbvvK7uK37o9ZUSQ6y0DQnkNA7jfPoN9jJWTB0p1ND2TWCPpqR1ySs5w0bPK6R
	H/ggt71PZjiaNuY325v6kRTsXVTEHDBHVD9tF3lCF3jiw4PZ+Q/bo5H5A0JbDUjw2PIc=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.16] xen/sched: introduce cpupool_update_node_affinity()
Message-Id: <E1oo9wx-000283-MT@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 20:58:03 +0000

commit d4e971ad12dd27913dffcf96b5de378ea7b476e1
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Tue Oct 11 14:59:40 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 14:59:40 2022 +0200

    xen/sched: introduce cpupool_update_node_affinity()
    
    For updating the node affinities of all domains in a cpupool add a new
    function cpupool_update_node_affinity().
    
    In order to avoid multiple allocations of cpumasks, carve out the
    memory allocation and freeing from domain_update_node_affinity() into
    new helpers, which can be used by cpupool_update_node_affinity().
    
    Modify domain_update_node_affinity() to take an additional parameter
    for passing the allocated memory in and to allocate and free the memory
    via the new helpers in case NULL was passed.
    
    This will help later to pre-allocate the cpumasks in order to avoid
    allocations in stop-machine context.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Tested-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: a83fa1e2b96ace65b45dde6954d67012633a082b
    master date: 2022-09-05 11:42:30 +0100
---
 xen/common/sched/core.c    | 54 +++++++++++++++++++++++++++++++---------------
 xen/common/sched/cpupool.c | 39 ++++++++++++++++++---------------
 xen/common/sched/private.h |  7 ++++++
 xen/include/xen/sched.h    |  9 +++++++-
 4 files changed, 74 insertions(+), 35 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index f07bd2681f..065a83eca9 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -1824,9 +1824,28 @@ int vcpu_affinity_domctl(struct domain *d, uint32_t cmd,
     return ret;
 }
 
-void domain_update_node_affinity(struct domain *d)
+bool alloc_affinity_masks(struct affinity_masks *affinity)
 {
-    cpumask_var_t dom_cpumask, dom_cpumask_soft;
+    if ( !alloc_cpumask_var(&affinity->hard) )
+        return false;
+    if ( !alloc_cpumask_var(&affinity->soft) )
+    {
+        free_cpumask_var(affinity->hard);
+        return false;
+    }
+
+    return true;
+}
+
+void free_affinity_masks(struct affinity_masks *affinity)
+{
+    free_cpumask_var(affinity->soft);
+    free_cpumask_var(affinity->hard);
+}
+
+void domain_update_node_aff(struct domain *d, struct affinity_masks *affinity)
+{
+    struct affinity_masks masks;
     cpumask_t *dom_affinity;
     const cpumask_t *online;
     struct sched_unit *unit;
@@ -1836,14 +1855,16 @@ void domain_update_node_affinity(struct domain *d)
     if ( !d->vcpu || !d->vcpu[0] )
         return;
 
-    if ( !zalloc_cpumask_var(&dom_cpumask) )
-        return;
-    if ( !zalloc_cpumask_var(&dom_cpumask_soft) )
+    if ( !affinity )
     {
-        free_cpumask_var(dom_cpumask);
-        return;
+        affinity = &masks;
+        if ( !alloc_affinity_masks(affinity) )
+            return;
     }
 
+    cpumask_clear(affinity->hard);
+    cpumask_clear(affinity->soft);
+
     online = cpupool_domain_master_cpumask(d);
 
     spin_lock(&d->node_affinity_lock);
@@ -1864,22 +1885,21 @@ void domain_update_node_affinity(struct domain *d)
          */
         for_each_sched_unit ( d, unit )
         {
-            cpumask_or(dom_cpumask, dom_cpumask, unit->cpu_hard_affinity);
-            cpumask_or(dom_cpumask_soft, dom_cpumask_soft,
-                       unit->cpu_soft_affinity);
+            cpumask_or(affinity->hard, affinity->hard, unit->cpu_hard_affinity);
+            cpumask_or(affinity->soft, affinity->soft, unit->cpu_soft_affinity);
         }
         /* Filter out non-online cpus */
-        cpumask_and(dom_cpumask, dom_cpumask, online);
-        ASSERT(!cpumask_empty(dom_cpumask));
+        cpumask_and(affinity->hard, affinity->hard, online);
+        ASSERT(!cpumask_empty(affinity->hard));
         /* And compute the intersection between hard, online and soft */
-        cpumask_and(dom_cpumask_soft, dom_cpumask_soft, dom_cpumask);
+        cpumask_and(affinity->soft, affinity->soft, affinity->hard);
 
         /*
          * If not empty, the intersection of hard, soft and online is the
          * narrowest set we want. If empty, we fall back to hard&online.
          */
-        dom_affinity = cpumask_empty(dom_cpumask_soft) ?
-                           dom_cpumask : dom_cpumask_soft;
+        dom_affinity = cpumask_empty(affinity->soft) ? affinity->hard
+                                                     : affinity->soft;
 
         nodes_clear(d->node_affinity);
         for_each_cpu ( cpu, dom_affinity )
@@ -1888,8 +1908,8 @@ void domain_update_node_affinity(struct domain *d)
 
     spin_unlock(&d->node_affinity_lock);
 
-    free_cpumask_var(dom_cpumask_soft);
-    free_cpumask_var(dom_cpumask);
+    if ( affinity == &masks )
+        free_affinity_masks(affinity);
 }
 
 typedef long ret_t;
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 8c6e6eb9cc..45b6ff9956 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -401,6 +401,25 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c)
     return ret;
 }
 
+/* Update affinities of all domains in a cpupool. */
+static void cpupool_update_node_affinity(const struct cpupool *c)
+{
+    struct affinity_masks masks;
+    struct domain *d;
+
+    if ( !alloc_affinity_masks(&masks) )
+        return;
+
+    rcu_read_lock(&domlist_read_lock);
+
+    for_each_domain_in_cpupool(d, c)
+        domain_update_node_aff(d, &masks);
+
+    rcu_read_unlock(&domlist_read_lock);
+
+    free_affinity_masks(&masks);
+}
+
 /*
  * assign a specific cpu to a cpupool
  * cpupool_lock must be held
@@ -408,7 +427,6 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c)
 static int cpupool_assign_cpu_locked(struct cpupool *c, unsigned int cpu)
 {
     int ret;
-    struct domain *d;
     const cpumask_t *cpus;
 
     cpus = sched_get_opt_cpumask(c->gran, cpu);
@@ -433,12 +451,7 @@ static int cpupool_assign_cpu_locked(struct cpupool *c, unsigned int cpu)
 
     rcu_read_unlock(&sched_res_rculock);
 
-    rcu_read_lock(&domlist_read_lock);
-    for_each_domain_in_cpupool(d, c)
-    {
-        domain_update_node_affinity(d);
-    }
-    rcu_read_unlock(&domlist_read_lock);
+    cpupool_update_node_affinity(c);
 
     return 0;
 }
@@ -447,18 +460,14 @@ static int cpupool_unassign_cpu_finish(struct cpupool *c)
 {
     int cpu = cpupool_moving_cpu;
     const cpumask_t *cpus;
-    struct domain *d;
     int ret;
 
     if ( c != cpupool_cpu_moving )
         return -EADDRNOTAVAIL;
 
-    /*
-     * We need this for scanning the domain list, both in
-     * cpu_disable_scheduler(), and at the bottom of this function.
-     */
     rcu_read_lock(&domlist_read_lock);
     ret = cpu_disable_scheduler(cpu);
+    rcu_read_unlock(&domlist_read_lock);
 
     rcu_read_lock(&sched_res_rculock);
     cpus = get_sched_res(cpu)->cpus;
@@ -485,11 +494,7 @@ static int cpupool_unassign_cpu_finish(struct cpupool *c)
     }
     rcu_read_unlock(&sched_res_rculock);
 
-    for_each_domain_in_cpupool(d, c)
-    {
-        domain_update_node_affinity(d);
-    }
-    rcu_read_unlock(&domlist_read_lock);
+    cpupool_update_node_affinity(c);
 
     return ret;
 }
diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h
index a870320146..2b04b01a0c 100644
--- a/xen/common/sched/private.h
+++ b/xen/common/sched/private.h
@@ -593,6 +593,13 @@ affinity_balance_cpumask(const struct sched_unit *unit, int step,
         cpumask_copy(mask, unit->cpu_hard_affinity);
 }
 
+struct affinity_masks {
+    cpumask_var_t hard;
+    cpumask_var_t soft;
+};
+
+bool alloc_affinity_masks(struct affinity_masks *affinity);
+void free_affinity_masks(struct affinity_masks *affinity);
 void sched_rm_cpu(unsigned int cpu);
 const cpumask_t *sched_get_opt_cpumask(enum sched_gran opt, unsigned int cpu);
 void schedule_dump(struct cpupool *c);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 9671062360..3f4225738a 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -655,8 +655,15 @@ static inline void get_knownalive_domain(struct domain *d)
     ASSERT(!(atomic_read(&d->refcnt) & DOMAIN_DESTROYED));
 }
 
+struct affinity_masks;
+
 int domain_set_node_affinity(struct domain *d, const nodemask_t *affinity);
-void domain_update_node_affinity(struct domain *d);
+void domain_update_node_aff(struct domain *d, struct affinity_masks *affinity);
+
+static inline void domain_update_node_affinity(struct domain *d)
+{
+    domain_update_node_aff(d, NULL);
+}
 
 /*
  * To be implemented by each architecture, sanity checking the configuration
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 20:58:15 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 20:58:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431234.684009 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9x9-0004KC-3Q; Thu, 27 Oct 2022 20:58:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431234.684009; Thu, 27 Oct 2022 20:58:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9x9-0004K3-0H; Thu, 27 Oct 2022 20:58:15 +0000
Received: by outflank-mailman (input) for mailman id 431234;
 Thu, 27 Oct 2022 20:58:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9x7-0004Jo-Qt
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:58:13 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9x7-0003Ns-QC
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:58:13 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9x7-00028S-PR
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:58:13 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=/JUStLLzWXjezHKxA09ty06S+HVwvAyD264Y91jKCeg=; b=FT1KODcJ8dGO4XUxIivKjhXsrl
	CyKWkYOPXTYVj43EhVGHtlwo3WkcpRPkwpiRX6CoL9CAZ7mTInLqQKIDkuBB3qGAi1VnfUtt8hYhE
	v75+FzUOBtdyoHYIHseaSCIx1fsa5js6AhJtwI2xlt25wUsNYIegrorq+h8pHKnEgZDY=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.16] xen/sched: carve out memory allocation and freeing from schedule_cpu_rm()
Message-Id: <E1oo9x7-00028S-PR@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 20:58:13 +0000

commit c377ceab0a007690a1e71c81a5232613c99e944d
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Tue Oct 11 15:00:05 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:00:05 2022 +0200

    xen/sched: carve out memory allocation and freeing from schedule_cpu_rm()
    
    In order to prepare not allocating or freeing memory from
    schedule_cpu_rm(), move this functionality to dedicated functions.
    
    For now call those functions from schedule_cpu_rm().
    
    No change of behavior expected.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: d42be6f83480b3ada286dc18444331a816be88a3
    master date: 2022-09-05 11:42:30 +0100
---
 xen/common/sched/core.c    | 143 +++++++++++++++++++++++++++------------------
 xen/common/sched/private.h |  11 ++++
 2 files changed, 98 insertions(+), 56 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 065a83eca9..2decb1161a 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -3221,6 +3221,75 @@ out:
     return ret;
 }
 
+/*
+ * Allocate all memory needed for free_cpu_rm_data(), as allocations cannot
+ * be made in stop_machine() context.
+ *
+ * Between alloc_cpu_rm_data() and the real cpu removal action the relevant
+ * contents of struct sched_resource can't change, as the cpu in question is
+ * locked against any other movement to or from cpupools, and the data copied
+ * by alloc_cpu_rm_data() is modified only in case the cpu in question is
+ * being moved from or to a cpupool.
+ */
+struct cpu_rm_data *alloc_cpu_rm_data(unsigned int cpu)
+{
+    struct cpu_rm_data *data;
+    const struct sched_resource *sr;
+    unsigned int idx;
+
+    rcu_read_lock(&sched_res_rculock);
+
+    sr = get_sched_res(cpu);
+    data = xmalloc_flex_struct(struct cpu_rm_data, sr, sr->granularity - 1);
+    if ( !data )
+        goto out;
+
+    data->old_ops = sr->scheduler;
+    data->vpriv_old = idle_vcpu[cpu]->sched_unit->priv;
+    data->ppriv_old = sr->sched_priv;
+
+    for ( idx = 0; idx < sr->granularity - 1; idx++ )
+    {
+        data->sr[idx] = sched_alloc_res();
+        if ( data->sr[idx] )
+        {
+            data->sr[idx]->sched_unit_idle = sched_alloc_unit_mem();
+            if ( !data->sr[idx]->sched_unit_idle )
+            {
+                sched_res_free(&data->sr[idx]->rcu);
+                data->sr[idx] = NULL;
+            }
+        }
+        if ( !data->sr[idx] )
+        {
+            while ( idx > 0 )
+                sched_res_free(&data->sr[--idx]->rcu);
+            XFREE(data);
+            goto out;
+        }
+
+        data->sr[idx]->curr = data->sr[idx]->sched_unit_idle;
+        data->sr[idx]->scheduler = &sched_idle_ops;
+        data->sr[idx]->granularity = 1;
+
+        /* We want the lock not to change when replacing the resource. */
+        data->sr[idx]->schedule_lock = sr->schedule_lock;
+    }
+
+ out:
+    rcu_read_unlock(&sched_res_rculock);
+
+    return data;
+}
+
+void free_cpu_rm_data(struct cpu_rm_data *mem, unsigned int cpu)
+{
+    sched_free_udata(mem->old_ops, mem->vpriv_old);
+    sched_free_pdata(mem->old_ops, mem->ppriv_old, cpu);
+
+    xfree(mem);
+}
+
 /*
  * Remove a pCPU from its cpupool. Its scheduler becomes &sched_idle_ops
  * (the idle scheduler).
@@ -3229,53 +3298,23 @@ out:
  */
 int schedule_cpu_rm(unsigned int cpu)
 {
-    void *ppriv_old, *vpriv_old;
-    struct sched_resource *sr, **sr_new = NULL;
+    struct sched_resource *sr;
+    struct cpu_rm_data *data;
     struct sched_unit *unit;
-    struct scheduler *old_ops;
     spinlock_t *old_lock;
     unsigned long flags;
-    int idx, ret = -ENOMEM;
+    int idx = 0;
     unsigned int cpu_iter;
 
+    data = alloc_cpu_rm_data(cpu);
+    if ( !data )
+        return -ENOMEM;
+
     rcu_read_lock(&sched_res_rculock);
 
     sr = get_sched_res(cpu);
-    old_ops = sr->scheduler;
-
-    if ( sr->granularity > 1 )
-    {
-        sr_new = xmalloc_array(struct sched_resource *, sr->granularity - 1);
-        if ( !sr_new )
-            goto out;
-        for ( idx = 0; idx < sr->granularity - 1; idx++ )
-        {
-            sr_new[idx] = sched_alloc_res();
-            if ( sr_new[idx] )
-            {
-                sr_new[idx]->sched_unit_idle = sched_alloc_unit_mem();
-                if ( !sr_new[idx]->sched_unit_idle )
-                {
-                    sched_res_free(&sr_new[idx]->rcu);
-                    sr_new[idx] = NULL;
-                }
-            }
-            if ( !sr_new[idx] )
-            {
-                for ( idx--; idx >= 0; idx-- )
-                    sched_res_free(&sr_new[idx]->rcu);
-                goto out;
-            }
-            sr_new[idx]->curr = sr_new[idx]->sched_unit_idle;
-            sr_new[idx]->scheduler = &sched_idle_ops;
-            sr_new[idx]->granularity = 1;
 
-            /* We want the lock not to change when replacing the resource. */
-            sr_new[idx]->schedule_lock = sr->schedule_lock;
-        }
-    }
-
-    ret = 0;
+    ASSERT(sr->granularity);
     ASSERT(sr->cpupool != NULL);
     ASSERT(cpumask_test_cpu(cpu, &cpupool_free_cpus));
     ASSERT(!cpumask_test_cpu(cpu, sr->cpupool->cpu_valid));
@@ -3283,10 +3322,6 @@ int schedule_cpu_rm(unsigned int cpu)
     /* See comment in schedule_cpu_add() regarding lock switching. */
     old_lock = pcpu_schedule_lock_irqsave(cpu, &flags);
 
-    vpriv_old = idle_vcpu[cpu]->sched_unit->priv;
-    ppriv_old = sr->sched_priv;
-
-    idx = 0;
     for_each_cpu ( cpu_iter, sr->cpus )
     {
         per_cpu(sched_res_idx, cpu_iter) = 0;
@@ -3300,27 +3335,27 @@ int schedule_cpu_rm(unsigned int cpu)
         else
         {
             /* Initialize unit. */
-            unit = sr_new[idx]->sched_unit_idle;
-            unit->res = sr_new[idx];
+            unit = data->sr[idx]->sched_unit_idle;
+            unit->res = data->sr[idx];
             unit->is_running = true;
             sched_unit_add_vcpu(unit, idle_vcpu[cpu_iter]);
             sched_domain_insert_unit(unit, idle_vcpu[cpu_iter]->domain);
 
             /* Adjust cpu masks of resources (old and new). */
             cpumask_clear_cpu(cpu_iter, sr->cpus);
-            cpumask_set_cpu(cpu_iter, sr_new[idx]->cpus);
+            cpumask_set_cpu(cpu_iter, data->sr[idx]->cpus);
             cpumask_set_cpu(cpu_iter, &sched_res_mask);
 
             /* Init timer. */
-            init_timer(&sr_new[idx]->s_timer, s_timer_fn, NULL, cpu_iter);
+            init_timer(&data->sr[idx]->s_timer, s_timer_fn, NULL, cpu_iter);
 
             /* Last resource initializations and insert resource pointer. */
-            sr_new[idx]->master_cpu = cpu_iter;
-            set_sched_res(cpu_iter, sr_new[idx]);
+            data->sr[idx]->master_cpu = cpu_iter;
+            set_sched_res(cpu_iter, data->sr[idx]);
 
             /* Last action: set the new lock pointer. */
             smp_mb();
-            sr_new[idx]->schedule_lock = &sched_free_cpu_lock;
+            data->sr[idx]->schedule_lock = &sched_free_cpu_lock;
 
             idx++;
         }
@@ -3336,16 +3371,12 @@ int schedule_cpu_rm(unsigned int cpu)
     /* _Not_ pcpu_schedule_unlock(): schedule_lock may have changed! */
     spin_unlock_irqrestore(old_lock, flags);
 
-    sched_deinit_pdata(old_ops, ppriv_old, cpu);
-
-    sched_free_udata(old_ops, vpriv_old);
-    sched_free_pdata(old_ops, ppriv_old, cpu);
+    sched_deinit_pdata(data->old_ops, data->ppriv_old, cpu);
 
-out:
     rcu_read_unlock(&sched_res_rculock);
-    xfree(sr_new);
+    free_cpu_rm_data(data, cpu);
 
-    return ret;
+    return 0;
 }
 
 struct scheduler *scheduler_get_default(void)
diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h
index 2b04b01a0c..e286849a13 100644
--- a/xen/common/sched/private.h
+++ b/xen/common/sched/private.h
@@ -600,6 +600,15 @@ struct affinity_masks {
 
 bool alloc_affinity_masks(struct affinity_masks *affinity);
 void free_affinity_masks(struct affinity_masks *affinity);
+
+/* Memory allocation related data for schedule_cpu_rm(). */
+struct cpu_rm_data {
+    const struct scheduler *old_ops;
+    void *ppriv_old;
+    void *vpriv_old;
+    struct sched_resource *sr[];
+};
+
 void sched_rm_cpu(unsigned int cpu);
 const cpumask_t *sched_get_opt_cpumask(enum sched_gran opt, unsigned int cpu);
 void schedule_dump(struct cpupool *c);
@@ -608,6 +617,8 @@ struct scheduler *scheduler_alloc(unsigned int sched_id);
 void scheduler_free(struct scheduler *sched);
 int cpu_disable_scheduler(unsigned int cpu);
 int schedule_cpu_add(unsigned int cpu, struct cpupool *c);
+struct cpu_rm_data *alloc_cpu_rm_data(unsigned int cpu);
+void free_cpu_rm_data(struct cpu_rm_data *mem, unsigned int cpu);
 int schedule_cpu_rm(unsigned int cpu);
 int sched_move_domain(struct domain *d, struct cpupool *c);
 struct cpupool *cpupool_get_by_id(unsigned int poolid);
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 20:58:25 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 20:58:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431235.684013 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9xJ-0004NU-6J; Thu, 27 Oct 2022 20:58:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431235.684013; Thu, 27 Oct 2022 20:58:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9xJ-0004NM-3k; Thu, 27 Oct 2022 20:58:25 +0000
Received: by outflank-mailman (input) for mailman id 431235;
 Thu, 27 Oct 2022 20:58:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9xH-0004NA-Tu
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:58:23 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9xH-0003OM-TB
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:58:23 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9xH-00028t-Sc
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:58:23 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=+cG6BMhERtYnrlYxSb0lOw39UdVYBsvWStn3/u4iJyM=; b=ojyiDbUBv0MQ/HBUlohuYRFH6V
	WiEGf+5NHIQw95nuvruwgk/i/oxcvjdlPZiVyZqy201KkD23yFUTBbMiVR8/mZnEprfZbEUtqDkiG
	utOCKWTjvIKE4H7ccY/z89DRLbJTaN2JrqA0wE7pAmat3o5ZwSQERoECTDX1EpVe0pCM=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.16] xen/sched: fix cpu hotplug
Message-Id: <E1oo9xH-00028t-Sc@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 20:58:23 +0000

commit 4f3204c2bc66db18c61600dd3e08bf1fd9584a1b
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Tue Oct 11 15:00:19 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:00:19 2022 +0200

    xen/sched: fix cpu hotplug
    
    Cpu unplugging is calling schedule_cpu_rm() via stop_machine_run() with
    interrupts disabled, thus any memory allocation or freeing must be
    avoided.
    
    Since commit 5047cd1d5dea ("xen/common: Use enhanced
    ASSERT_ALLOC_CONTEXT in xmalloc()") this restriction is being enforced
    via an assertion, which will now fail.
    
    Fix this by allocating needed memory before entering stop_machine_run()
    and freeing any memory only after having finished stop_machine_run().
    
    Fixes: 1ec410112cdd ("xen/sched: support differing granularity in schedule_cpu_[add/rm]()")
    Reported-by: Gao Ruifeng <ruifeng.gao@intel.com>
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Tested-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: d84473689611eed32fd90b27e614f28af767fa3f
    master date: 2022-09-05 11:42:30 +0100
---
 xen/common/sched/core.c    | 25 +++++++++++++----
 xen/common/sched/cpupool.c | 69 ++++++++++++++++++++++++++++++++++++----------
 xen/common/sched/private.h |  5 ++--
 3 files changed, 77 insertions(+), 22 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 2decb1161a..900aab8f66 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -3231,7 +3231,7 @@ out:
  * by alloc_cpu_rm_data() is modified only in case the cpu in question is
  * being moved from or to a cpupool.
  */
-struct cpu_rm_data *alloc_cpu_rm_data(unsigned int cpu)
+struct cpu_rm_data *alloc_cpu_rm_data(unsigned int cpu, bool aff_alloc)
 {
     struct cpu_rm_data *data;
     const struct sched_resource *sr;
@@ -3244,6 +3244,17 @@ struct cpu_rm_data *alloc_cpu_rm_data(unsigned int cpu)
     if ( !data )
         goto out;
 
+    if ( aff_alloc )
+    {
+        if ( !alloc_affinity_masks(&data->affinity) )
+        {
+            XFREE(data);
+            goto out;
+        }
+    }
+    else
+        memset(&data->affinity, 0, sizeof(data->affinity));
+
     data->old_ops = sr->scheduler;
     data->vpriv_old = idle_vcpu[cpu]->sched_unit->priv;
     data->ppriv_old = sr->sched_priv;
@@ -3264,6 +3275,7 @@ struct cpu_rm_data *alloc_cpu_rm_data(unsigned int cpu)
         {
             while ( idx > 0 )
                 sched_res_free(&data->sr[--idx]->rcu);
+            free_affinity_masks(&data->affinity);
             XFREE(data);
             goto out;
         }
@@ -3286,6 +3298,7 @@ void free_cpu_rm_data(struct cpu_rm_data *mem, unsigned int cpu)
 {
     sched_free_udata(mem->old_ops, mem->vpriv_old);
     sched_free_pdata(mem->old_ops, mem->ppriv_old, cpu);
+    free_affinity_masks(&mem->affinity);
 
     xfree(mem);
 }
@@ -3296,17 +3309,18 @@ void free_cpu_rm_data(struct cpu_rm_data *mem, unsigned int cpu)
  * The cpu is already marked as "free" and not valid any longer for its
  * cpupool.
  */
-int schedule_cpu_rm(unsigned int cpu)
+int schedule_cpu_rm(unsigned int cpu, struct cpu_rm_data *data)
 {
     struct sched_resource *sr;
-    struct cpu_rm_data *data;
     struct sched_unit *unit;
     spinlock_t *old_lock;
     unsigned long flags;
     int idx = 0;
     unsigned int cpu_iter;
+    bool free_data = !data;
 
-    data = alloc_cpu_rm_data(cpu);
+    if ( !data )
+        data = alloc_cpu_rm_data(cpu, false);
     if ( !data )
         return -ENOMEM;
 
@@ -3374,7 +3388,8 @@ int schedule_cpu_rm(unsigned int cpu)
     sched_deinit_pdata(data->old_ops, data->ppriv_old, cpu);
 
     rcu_read_unlock(&sched_res_rculock);
-    free_cpu_rm_data(data, cpu);
+    if ( free_data )
+        free_cpu_rm_data(data, cpu);
 
     return 0;
 }
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 45b6ff9956..b5a948639a 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -402,22 +402,28 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c)
 }
 
 /* Update affinities of all domains in a cpupool. */
-static void cpupool_update_node_affinity(const struct cpupool *c)
+static void cpupool_update_node_affinity(const struct cpupool *c,
+                                         struct affinity_masks *masks)
 {
-    struct affinity_masks masks;
+    struct affinity_masks local_masks;
     struct domain *d;
 
-    if ( !alloc_affinity_masks(&masks) )
-        return;
+    if ( !masks )
+    {
+        if ( !alloc_affinity_masks(&local_masks) )
+            return;
+        masks = &local_masks;
+    }
 
     rcu_read_lock(&domlist_read_lock);
 
     for_each_domain_in_cpupool(d, c)
-        domain_update_node_aff(d, &masks);
+        domain_update_node_aff(d, masks);
 
     rcu_read_unlock(&domlist_read_lock);
 
-    free_affinity_masks(&masks);
+    if ( masks == &local_masks )
+        free_affinity_masks(masks);
 }
 
 /*
@@ -451,15 +457,17 @@ static int cpupool_assign_cpu_locked(struct cpupool *c, unsigned int cpu)
 
     rcu_read_unlock(&sched_res_rculock);
 
-    cpupool_update_node_affinity(c);
+    cpupool_update_node_affinity(c, NULL);
 
     return 0;
 }
 
-static int cpupool_unassign_cpu_finish(struct cpupool *c)
+static int cpupool_unassign_cpu_finish(struct cpupool *c,
+                                       struct cpu_rm_data *mem)
 {
     int cpu = cpupool_moving_cpu;
     const cpumask_t *cpus;
+    struct affinity_masks *masks = mem ? &mem->affinity : NULL;
     int ret;
 
     if ( c != cpupool_cpu_moving )
@@ -482,7 +490,7 @@ static int cpupool_unassign_cpu_finish(struct cpupool *c)
      */
     if ( !ret )
     {
-        ret = schedule_cpu_rm(cpu);
+        ret = schedule_cpu_rm(cpu, mem);
         if ( ret )
             cpumask_andnot(&cpupool_free_cpus, &cpupool_free_cpus, cpus);
         else
@@ -494,7 +502,7 @@ static int cpupool_unassign_cpu_finish(struct cpupool *c)
     }
     rcu_read_unlock(&sched_res_rculock);
 
-    cpupool_update_node_affinity(c);
+    cpupool_update_node_affinity(c, masks);
 
     return ret;
 }
@@ -558,7 +566,7 @@ static long cpupool_unassign_cpu_helper(void *info)
                       cpupool_cpu_moving->cpupool_id, cpupool_moving_cpu);
     spin_lock(&cpupool_lock);
 
-    ret = cpupool_unassign_cpu_finish(c);
+    ret = cpupool_unassign_cpu_finish(c, NULL);
 
     spin_unlock(&cpupool_lock);
     debugtrace_printk("cpupool_unassign_cpu ret=%ld\n", ret);
@@ -701,7 +709,7 @@ static int cpupool_cpu_add(unsigned int cpu)
  * This function is called in stop_machine context, so we can be sure no
  * non-idle vcpu is active on the system.
  */
-static void cpupool_cpu_remove(unsigned int cpu)
+static void cpupool_cpu_remove(unsigned int cpu, struct cpu_rm_data *mem)
 {
     int ret;
 
@@ -709,7 +717,7 @@ static void cpupool_cpu_remove(unsigned int cpu)
 
     if ( !cpumask_test_cpu(cpu, &cpupool_free_cpus) )
     {
-        ret = cpupool_unassign_cpu_finish(cpupool0);
+        ret = cpupool_unassign_cpu_finish(cpupool0, mem);
         BUG_ON(ret);
     }
     cpumask_clear_cpu(cpu, &cpupool_free_cpus);
@@ -775,7 +783,7 @@ static void cpupool_cpu_remove_forced(unsigned int cpu)
         {
             ret = cpupool_unassign_cpu_start(c, master_cpu);
             BUG_ON(ret);
-            ret = cpupool_unassign_cpu_finish(c);
+            ret = cpupool_unassign_cpu_finish(c, NULL);
             BUG_ON(ret);
         }
     }
@@ -993,12 +1001,24 @@ void dump_runq(unsigned char key)
 static int cpu_callback(
     struct notifier_block *nfb, unsigned long action, void *hcpu)
 {
+    static struct cpu_rm_data *mem;
+
     unsigned int cpu = (unsigned long)hcpu;
     int rc = 0;
 
     switch ( action )
     {
     case CPU_DOWN_FAILED:
+        if ( system_state <= SYS_STATE_active )
+        {
+            if ( mem )
+            {
+                free_cpu_rm_data(mem, cpu);
+                mem = NULL;
+            }
+            rc = cpupool_cpu_add(cpu);
+        }
+        break;
     case CPU_ONLINE:
         if ( system_state <= SYS_STATE_active )
             rc = cpupool_cpu_add(cpu);
@@ -1006,12 +1026,31 @@ static int cpu_callback(
     case CPU_DOWN_PREPARE:
         /* Suspend/Resume don't change assignments of cpus to cpupools. */
         if ( system_state <= SYS_STATE_active )
+        {
             rc = cpupool_cpu_remove_prologue(cpu);
+            if ( !rc )
+            {
+                ASSERT(!mem);
+                mem = alloc_cpu_rm_data(cpu, true);
+                rc = mem ? 0 : -ENOMEM;
+            }
+        }
         break;
     case CPU_DYING:
         /* Suspend/Resume don't change assignments of cpus to cpupools. */
         if ( system_state <= SYS_STATE_active )
-            cpupool_cpu_remove(cpu);
+        {
+            ASSERT(mem);
+            cpupool_cpu_remove(cpu, mem);
+        }
+        break;
+    case CPU_DEAD:
+        if ( system_state <= SYS_STATE_active )
+        {
+            ASSERT(mem);
+            free_cpu_rm_data(mem, cpu);
+            mem = NULL;
+        }
         break;
     case CPU_RESUME_FAILED:
         cpupool_cpu_remove_forced(cpu);
diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h
index e286849a13..0126a4bb9e 100644
--- a/xen/common/sched/private.h
+++ b/xen/common/sched/private.h
@@ -603,6 +603,7 @@ void free_affinity_masks(struct affinity_masks *affinity);
 
 /* Memory allocation related data for schedule_cpu_rm(). */
 struct cpu_rm_data {
+    struct affinity_masks affinity;
     const struct scheduler *old_ops;
     void *ppriv_old;
     void *vpriv_old;
@@ -617,9 +618,9 @@ struct scheduler *scheduler_alloc(unsigned int sched_id);
 void scheduler_free(struct scheduler *sched);
 int cpu_disable_scheduler(unsigned int cpu);
 int schedule_cpu_add(unsigned int cpu, struct cpupool *c);
-struct cpu_rm_data *alloc_cpu_rm_data(unsigned int cpu);
+struct cpu_rm_data *alloc_cpu_rm_data(unsigned int cpu, bool aff_alloc);
 void free_cpu_rm_data(struct cpu_rm_data *mem, unsigned int cpu);
-int schedule_cpu_rm(unsigned int cpu);
+int schedule_cpu_rm(unsigned int cpu, struct cpu_rm_data *mem);
 int sched_move_domain(struct domain *d, struct cpupool *c);
 struct cpupool *cpupool_get_by_id(unsigned int poolid);
 void cpupool_put(struct cpupool *pool);
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 20:58:35 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 20:58:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431236.684017 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9xT-0004QE-7z; Thu, 27 Oct 2022 20:58:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431236.684017; Thu, 27 Oct 2022 20:58:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9xT-0004Q7-5J; Thu, 27 Oct 2022 20:58:35 +0000
Received: by outflank-mailman (input) for mailman id 431236;
 Thu, 27 Oct 2022 20:58:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9xS-0004Px-0G
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:58:34 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9xR-0003OQ-Vw
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:58:33 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9xR-00029I-VP
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:58:33 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=Tuvyfd/ctWt/JwStrmeaY/S7PAl60B2pVKweY4wnjI4=; b=kzM4AUBQC4eKBbzvoXd6PiCrSA
	5GhocYj64ZQTcQL0o27GfNpXFTlDF+iDWYOkWQRH+NRJXFjvC29NlXOOfOiskG4rBMT2xz+BaXuuK
	DfkI2XHYYYiSOIk+h51FtVoDKlLEZw/KQu71BjIh02ypPe5081AGeLhbLvKdcxld0kkM=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.16] Config.mk: correct PIE-related option(s) in EMBEDDED_EXTRA_CFLAGS
Message-Id: <E1oo9xR-00029I-VP@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 20:58:33 +0000

commit 2b694dd2932be78431b14257f23b738f2fc8f6a1
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 15:00:33 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:00:33 2022 +0200

    Config.mk: correct PIE-related option(s) in EMBEDDED_EXTRA_CFLAGS
    
    I haven't been able to find evidence of "-nopie" ever having been a
    supported compiler option. The correct spelling is "-no-pie".
    Furthermore like "-pie" this is an option which is solely passed to the
    linker. The compiler only recognizes "-fpie" / "-fPIE" / "-fno-pie", and
    it doesn't infer these options from "-pie" / "-no-pie".
    
    Add the compiler recognized form, but for the possible case of the
    variable also being used somewhere for linking keep the linker option as
    well (with corrected spelling).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
    
    Build: Drop -no-pie from EMBEDDED_EXTRA_CFLAGS
    
    This breaks all Clang builds, as demonstrated by Gitlab CI.
    
    Contrary to the description in ecd6b9759919, -no-pie is not even an option
    passed to the linker.  GCC's actual behaviour is to inhibit the passing of
    -pie to the linker, as well as selecting different crt0 artefacts to be linked.
    
    EMBEDDED_EXTRA_CFLAGS is not used for $(CC)-doing-linking, and not liable to
    gain such a usecase.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Tested-by: Stefano Stabellini <sstabellini@kernel.org>
    Fixes: ecd6b9759919 ("Config.mk: correct PIE-related option(s) in EMBEDDED_EXTRA_CFLAGS")
    master commit: ecd6b9759919fa6335b0be1b5fc5cce29a30c4f1
    master date: 2022-09-08 09:25:26 +0200
    master commit: 13a7c0074ac8fb31f6c0485429b7a20a1946cb22
    master date: 2022-09-27 15:40:42 -0700
---
 Config.mk | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Config.mk b/Config.mk
index 46de3cd1e0..6f95067b8d 100644
--- a/Config.mk
+++ b/Config.mk
@@ -197,7 +197,7 @@ endif
 APPEND_LDFLAGS += $(foreach i, $(APPEND_LIB), -L$(i))
 APPEND_CFLAGS += $(foreach i, $(APPEND_INCLUDES), -I$(i))
 
-EMBEDDED_EXTRA_CFLAGS := -nopie -fno-stack-protector -fno-stack-protector-all
+EMBEDDED_EXTRA_CFLAGS := -fno-pie -fno-stack-protector -fno-stack-protector-all
 EMBEDDED_EXTRA_CFLAGS += -fno-exceptions -fno-asynchronous-unwind-tables
 
 XEN_EXTFILES_URL ?= http://xenbits.xen.org/xen-extfiles
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 20:58:45 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 20:58:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431237.684021 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9xd-0004TA-9W; Thu, 27 Oct 2022 20:58:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431237.684021; Thu, 27 Oct 2022 20:58:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9xd-0004T2-6t; Thu, 27 Oct 2022 20:58:45 +0000
Received: by outflank-mailman (input) for mailman id 431237;
 Thu, 27 Oct 2022 20:58:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9xc-0004Su-3F
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:58:44 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9xc-0003Oi-2Z
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:58:44 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9xc-00029j-1u
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:58:44 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=p+WV1VjuHpGPu/H1XSVkvNwYvxdYU+8z1JeEmWVbQpw=; b=F+dObScUfovsxxCL7bJc/wfWkp
	CcgJoqI2D+wBpkAs4vLc/yDpysnc5ZgZAYF9PIZ9rSOhQKLIhFIWPVO1VQvFP9WNJhQaDhA4kg0Xm
	R0tQjRS+I8hwAobWnXdhESu2a6YSkMrf77rpeySXc2BWeK4bgr+MwnLk8m41i2aNWLRo=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.16] tools/xenstore: minor fix of the migration stream doc
Message-Id: <E1oo9xc-00029j-1u@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 20:58:44 +0000

commit 49510071ee93905378e54664778760ed3908d447
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Tue Oct 11 15:00:59 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:00:59 2022 +0200

    tools/xenstore: minor fix of the migration stream doc
    
    Drop mentioning the non-existent read-only socket in the migration
    stream description document.
    
    The related record field was removed in commit 8868a0e3f674 ("docs:
    update the xenstore migration stream documentation").
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
    master commit: ace1d2eff80d3d66c37ae765dae3e3cb5697e5a4
    master date: 2022-09-08 09:25:58 +0200
---
 docs/designs/xenstore-migration.md | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/docs/designs/xenstore-migration.md b/docs/designs/xenstore-migration.md
index 5f1155273e..78530bbb0e 100644
--- a/docs/designs/xenstore-migration.md
+++ b/docs/designs/xenstore-migration.md
@@ -129,11 +129,9 @@ xenstored state that needs to be restored.
 | `evtchn-fd`    | The file descriptor used to communicate with |
 |                | the event channel driver                     |
 
-xenstored will resume in the original process context. Hence `rw-socket-fd` and
-`ro-socket-fd` simply specify the file descriptors of the sockets. Sockets
-are not always used, however, and so -1 will be used to denote an unused
-socket.
-
+xenstored will resume in the original process context. Hence `rw-socket-fd`
+simply specifies the file descriptor of the socket. Sockets are not always
+used, however, and so -1 will be used to denote an unused socket.
 
 \pagebreak
 
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 20:58:55 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 20:58:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431238.684027 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9xn-0004W1-Cf; Thu, 27 Oct 2022 20:58:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431238.684027; Thu, 27 Oct 2022 20:58:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9xn-0004Vt-8O; Thu, 27 Oct 2022 20:58:55 +0000
Received: by outflank-mailman (input) for mailman id 431238;
 Thu, 27 Oct 2022 20:58:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9xm-0004Vf-6E
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:58:54 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9xm-0003Om-5c
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:58:54 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9xm-0002Bf-53
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:58:54 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=vg7U16GjIZ7s5118me75KyoGoEBj0rJgy9bXRAAT9EE=; b=NAalKu6LgiaPjq3i5NudpUJqY6
	xgzPJ7zFhgMGI1L3yqMP70p+blx7F1/56GgexyEwAcsdJmAoelEW+N4r8agANeSzjCIb5SrpO/H16
	Vo3m3k7T28HjoNHkdNBJ/pOughn4v6h2Df17KjRN3D9bPwnKPHgo3Wr1tcI/ZyWaK02c=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.16] xen/gnttab: fix gnttab_acquire_resource()
Message-Id: <E1oo9xm-0002Bf-53@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 20:58:54 +0000

commit b9560762392c01b3ee84148c07be8017cb42dbc9
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Tue Oct 11 15:01:22 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:01:22 2022 +0200

    xen/gnttab: fix gnttab_acquire_resource()
    
    Commit 9dc46386d89d ("gnttab: work around "may be used uninitialized"
    warning") was wrong, as vaddrs can legitimately be NULL in case
    XENMEM_resource_grant_table_id_status was specified for a grant table
    v1. This would result in crashes in debug builds due to
    ASSERT_UNREACHABLE() triggering.
    
    Check vaddrs only to be NULL in the rc == 0 case.
    
    Expand the tests in tools/tests/resource to tickle this path, and verify that
    using XENMEM_resource_grant_table_id_status on a v1 grant table fails.
    
    Fixes: 9dc46386d89d ("gnttab: work around "may be used uninitialized" warning")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com> # xen
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 52daa6a8483e4fbd6757c9d1b791e23931791608
    master date: 2022-09-09 16:28:38 +0100
---
 tools/tests/resource/test-resource.c | 15 +++++++++++++++
 xen/common/grant_table.c             |  2 +-
 2 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/tools/tests/resource/test-resource.c b/tools/tests/resource/test-resource.c
index 0557f8a1b5..37dfff4dcd 100644
--- a/tools/tests/resource/test-resource.c
+++ b/tools/tests/resource/test-resource.c
@@ -106,6 +106,21 @@ static void test_gnttab(uint32_t domid, unsigned int nr_frames,
     if ( rc )
         return fail("    Fail: Unmap grant table %d - %s\n",
                     errno, strerror(errno));
+
+    /*
+     * Verify that an attempt to map the status frames fails, as the domain is
+     * in gnttab v1 mode.
+     */
+    res = xenforeignmemory_map_resource(
+        fh, domid, XENMEM_resource_grant_table,
+        XENMEM_resource_grant_table_id_status, 0, 1,
+        (void **)&gnttab, PROT_READ | PROT_WRITE, 0);
+
+    if ( res )
+    {
+        fail("    Fail: Managed to map gnttab v2 status frames in v1 mode\n");
+        xenforeignmemory_unmap_resource(fh, res);
+    }
 }
 
 static void test_domain_configurations(void)
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index d8ca645b96..76272b3c8a 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -4142,7 +4142,7 @@ int gnttab_acquire_resource(
      * on non-error paths, and hence it needs setting to NULL at the top of the
      * function.  Leave some runtime safety.
      */
-    if ( !vaddrs )
+    if ( !rc && !vaddrs )
     {
         ASSERT_UNREACHABLE();
         rc = -ENODATA;
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 20:59:05 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 20:59:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431239.684029 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9xx-0004Ym-Cu; Thu, 27 Oct 2022 20:59:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431239.684029; Thu, 27 Oct 2022 20:59:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9xx-0004Yf-A0; Thu, 27 Oct 2022 20:59:05 +0000
Received: by outflank-mailman (input) for mailman id 431239;
 Thu, 27 Oct 2022 20:59:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9xw-0004YV-9b
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:59:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9xw-0003P5-8v
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:59:04 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9xw-0002CM-7o
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:59:04 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=IfvztdRRZ5qmancmhuiujje1hqhUYAgAG82DCLdA1Wg=; b=UkCwli/DuXMfEyOKXq+7o9kmtv
	S8brZzk6QniQ1lcExFb9wy6O80roOgy9GSccBHcpnVc2GKmiCB8Mqc6gnbOy6o+cV3PcYB3TOWQp8
	4JLFI0JXy900FWFilhNExEMHnICUp5YsZgaX5v5+hmlE6JKAa8ygyje7eAv2LsSqqbAU=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.16] x86: wire up VCPUOP_register_vcpu_time_memory_area for 32-bit guests
Message-Id: <E1oo9xw-0002CM-7o@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 20:59:04 +0000

commit 3f4da85ca8816f6617529c80850eaddd80ea0f1f
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 15:01:36 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:01:36 2022 +0200

    x86: wire up VCPUOP_register_vcpu_time_memory_area for 32-bit guests
    
    Ever since its introduction, VCPUOP_register_vcpu_time_memory_area has
    been available only to native domains. Linux, for example, would attempt
    to use it irrespective of guest bitness (including in its so-called PVHVM
    mode) as long as it finds XEN_PVCLOCK_TSC_STABLE_BIT set (which we set
    only for clocksource=tsc, which in turn needs engaging via command line
    option).
    
    Fixes: a5d39947cb89 ("Allow guests to register secondary vcpu_time_info")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    master commit: b726541d94bd0a80b5864d17a2cd2e6d73a3fe0a
    master date: 2022-09-29 14:47:45 +0200
---
 xen/arch/x86/x86_64/domain.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/xen/arch/x86/x86_64/domain.c b/xen/arch/x86/x86_64/domain.c
index c46dccc25a..d51d993447 100644
--- a/xen/arch/x86/x86_64/domain.c
+++ b/xen/arch/x86/x86_64/domain.c
@@ -54,6 +54,26 @@ arch_compat_vcpu_op(
         break;
     }
 
+    case VCPUOP_register_vcpu_time_memory_area:
+    {
+        struct compat_vcpu_register_time_memory_area area = { .addr.p = 0 };
+
+        rc = -EFAULT;
+        if ( copy_from_guest(&area.addr.h, arg, 1) )
+            break;
+
+        if ( area.addr.h.c != area.addr.p ||
+             !compat_handle_okay(area.addr.h, 1) )
+            break;
+
+        rc = 0;
+        guest_from_compat_handle(v->arch.time_info_guest, area.addr.h);
+
+        force_update_vcpu_system_time(v);
+
+        break;
+    }
+
     case VCPUOP_get_physid:
         rc = arch_do_vcpu_op(cmd, v, arg);
         break;
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 20:59:15 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 20:59:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431240.684033 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9y7-0004bS-Ej; Thu, 27 Oct 2022 20:59:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431240.684033; Thu, 27 Oct 2022 20:59:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9y7-0004bK-Bo; Thu, 27 Oct 2022 20:59:15 +0000
Received: by outflank-mailman (input) for mailman id 431240;
 Thu, 27 Oct 2022 20:59:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9y6-0004bB-CM
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:59:14 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9y6-0003P9-Bh
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:59:14 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9y6-0002D8-B8
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:59:14 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=fYmFeRvstJIavnlVYbtgZKTWUwrEUAQziov3uKOcIBo=; b=xG/8hw2HiUJAFu0Fd9XSU2aszE
	p2WTU8Tj8qJDD/LQ5Ozpv5NmOB6vPVvcBHhAWDf0Vc8mKcob9mMmtykQM81vZvhila0SmgDpdDqbR
	wN9U4KPjkGKea/4voaAB39y6pcl9dpB7yZaA8li2/lhtJ2zJz/v/YnJU1EkHopAeng7o=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.16] x86/vpmu: Fix race-condition in vpmu_load
Message-Id: <E1oo9y6-0002D8-B8@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 20:59:14 +0000

commit 1bce7fb1f702da4f7a749c6f1457ecb20bf74fca
Author:     Tamas K Lengyel <tamas.lengyel@intel.com>
AuthorDate: Tue Oct 11 15:01:48 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:01:48 2022 +0200

    x86/vpmu: Fix race-condition in vpmu_load
    
    The vPMU code-base attempts to optimize saving/reloading the PMU context
    by keeping track of which vCPU ran on each pCPU. When a pCPU is getting
    scheduled, it checks whether the previous vCPU is different from the
    current one and, if so, attempts a call to vpmu_save_force. Unfortunately,
    if the previous vCPU is already getting scheduled to run on another pCPU,
    its state will already be runnable, which results in an ASSERT failure.
    
    Fix this by always performing a pmu context save in vpmu_save when called from
    vpmu_switch_from, and do a vpmu_load when called from vpmu_switch_to.
    
    While this adds minimal overhead in case the same vCPU is rescheduled on
    the same pCPU, the ASSERT failure is avoided and the code is a lot easier
    to reason about.
    
    Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    master commit: defa4e51d20a143bdd4395a075bf0933bb38a9a4
    master date: 2022-09-30 09:53:49 +0200
---
 xen/arch/x86/cpu/vpmu.c | 42 ++++--------------------------------------
 1 file changed, 4 insertions(+), 38 deletions(-)

diff --git a/xen/arch/x86/cpu/vpmu.c b/xen/arch/x86/cpu/vpmu.c
index 16e91a3694..b6c2ec3cd0 100644
--- a/xen/arch/x86/cpu/vpmu.c
+++ b/xen/arch/x86/cpu/vpmu.c
@@ -368,58 +368,24 @@ void vpmu_save(struct vcpu *v)
     vpmu->last_pcpu = pcpu;
     per_cpu(last_vcpu, pcpu) = v;
 
+    vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+
     if ( vpmu->arch_vpmu_ops )
         if ( vpmu->arch_vpmu_ops->arch_vpmu_save(v, 0) )
             vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
 
+    vpmu_reset(vpmu, VPMU_CONTEXT_SAVE);
+
     apic_write(APIC_LVTPC, PMU_APIC_VECTOR | APIC_LVT_MASKED);
 }
 
 int vpmu_load(struct vcpu *v, bool_t from_guest)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    int pcpu = smp_processor_id();
-    struct vcpu *prev = NULL;
 
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
         return 0;
 
-    /* First time this VCPU is running here */
-    if ( vpmu->last_pcpu != pcpu )
-    {
-        /*
-         * Get the context from last pcpu that we ran on. Note that if another
-         * VCPU is running there it must have saved this VPCU's context before
-         * startig to run (see below).
-         * There should be no race since remote pcpu will disable interrupts
-         * before saving the context.
-         */
-        if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-        {
-            on_selected_cpus(cpumask_of(vpmu->last_pcpu),
-                             vpmu_save_force, (void *)v, 1);
-            vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
-        }
-    } 
-
-    /* Prevent forced context save from remote CPU */
-    local_irq_disable();
-
-    prev = per_cpu(last_vcpu, pcpu);
-
-    if ( prev != v && prev )
-    {
-        vpmu = vcpu_vpmu(prev);
-
-        /* Someone ran here before us */
-        vpmu_save_force(prev);
-        vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
-
-        vpmu = vcpu_vpmu(v);
-    }
-
-    local_irq_enable();
-
     /* Only when PMU is counting, we load PMU context immediately. */
     if ( !vpmu_is_set(vpmu, VPMU_RUNNING) ||
          (!has_vlapic(vpmu_vcpu(vpmu)->domain) &&
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 20:59:26 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 20:59:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431241.684037 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9yI-0004fE-IP; Thu, 27 Oct 2022 20:59:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431241.684037; Thu, 27 Oct 2022 20:59:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9yI-0004f6-Fn; Thu, 27 Oct 2022 20:59:26 +0000
Received: by outflank-mailman (input) for mailman id 431241;
 Thu, 27 Oct 2022 20:59:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9yG-0004et-FJ
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:59:24 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9yG-0003Pb-Ed
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:59:24 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9yG-0002Df-Dy
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:59:24 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=eaKuT9IlLBDSnY7XWB79HB4NVLl4LdbwOkcWwP7rsQQ=; b=g5/kj9g0MIWNv1WqFhozrb/FkD
	rwWiBUkPLwTgWmCyHs5/9lZfYtA2ypWWvSJJ/Jl/ICuAXXN3a2zKpKecoQTt5rCeogcr6rptwuqpA
	6IvJK/teaSQ2A2Lu1C6Mz4G8YgxF29gV15k7Hb0jHXEErZescFOC4YsGFJx1hoYwNKoM=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.16] arm/p2m: Rework p2m_init()
Message-Id: <E1oo9yG-0002Df-Dy@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 20:59:24 +0000

commit 86cb37447548420e41ff953a7372972f6154d6d1
Author:     Andrew Cooper <andrew.cooper3@citrix.com>
AuthorDate: Tue Oct 25 09:21:11 2022 +0000
Commit:     Julien Grall <jgrall@amazon.com>
CommitDate: Tue Oct 25 20:52:43 2022 +0100

    arm/p2m: Rework p2m_init()
    
    p2m_init() is mostly trivial initialisation, but has two fallible
    operations which sit on either side of the backpointer that triggers
    teardown to take action.
    
    p2m_free_vmid() is idempotent with a failed p2m_alloc_vmid(), so rearrange
    p2m_init() to perform all trivial setup, then set the backpointer, then
    perform all fallible setup.
    
    This will simplify a future bugfix which needs to add a third fallible
    operation.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
    (cherry picked from commit: 3783e583319fa1ce75e414d851f0fde191a14753)
---
 xen/arch/arm/p2m.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index b2d856a801..4f7d923ad9 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1730,7 +1730,7 @@ void p2m_final_teardown(struct domain *d)
 int p2m_init(struct domain *d)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    int rc = 0;
+    int rc;
     unsigned int cpu;
 
     rwlock_init(&p2m->lock);
@@ -1739,11 +1739,6 @@ int p2m_init(struct domain *d)
     INIT_PAGE_LIST_HEAD(&d->arch.paging.p2m_freelist);
 
     p2m->vmid = INVALID_VMID;
-
-    rc = p2m_alloc_vmid(d);
-    if ( rc != 0 )
-        return rc;
-
     p2m->max_mapped_gfn = _gfn(0);
     p2m->lowest_mapped_gfn = _gfn(ULONG_MAX);
 
@@ -1759,8 +1754,6 @@ int p2m_init(struct domain *d)
     p2m->clean_pte = is_iommu_enabled(d) &&
         !iommu_has_feature(d, IOMMU_FEAT_COHERENT_WALK);
 
-    rc = p2m_alloc_table(d);
-
     /*
      * Make sure that the type chosen to is able to store the an vCPU ID
      * between 0 and the maximum of virtual CPUS supported as long as
@@ -1773,13 +1766,20 @@ int p2m_init(struct domain *d)
        p2m->last_vcpu_ran[cpu] = INVALID_VCPU_ID;
 
     /*
-     * Besides getting a domain when we only have the p2m in hand,
-     * the back pointer to domain is also used in p2m_teardown()
-     * as an end-of-initialization indicator.
+     * "Trivial" initialisation is now complete.  Set the backpointer so
+     * p2m_teardown() and friends know to do something.
      */
     p2m->domain = d;
 
-    return rc;
+    rc = p2m_alloc_vmid(d);
+    if ( rc )
+        return rc;
+
+    rc = p2m_alloc_table(d);
+    if ( rc )
+        return rc;
+
+    return 0;
 }
 
 /*
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16


From xen-changelog-bounces@lists.xenproject.org Thu Oct 27 20:59:36 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Oct 2022 20:59:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431242.684041 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9yS-0004iJ-LL; Thu, 27 Oct 2022 20:59:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431242.684041; Thu, 27 Oct 2022 20:59:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oo9yS-0004iD-HN; Thu, 27 Oct 2022 20:59:36 +0000
Received: by outflank-mailman (input) for mailman id 431242;
 Thu, 27 Oct 2022 20:59:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9yQ-0004hR-IL
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:59:34 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9yQ-0003Pf-Hk
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:59:34 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oo9yQ-0002E6-HC
 for xen-changelog@lists.xenproject.org; Thu, 27 Oct 2022 20:59:34 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=eCaqgKCIBnzdO7tU5OS/i+pg+cZcUTjuLH5VFJs41u4=; b=UYD3B4lwDu+BtHiH21vF6uA+Mj
	PoCEyCkQA8FGFQX+D3qwP3ajXVRGMVT6Wj7CSsjs1Y1xZ7GEzleDNWobG23aRL1IwEkOCzu9eZ75p
	/V7aLxx3+NGJin2t0f++tk2/hwlbcBfuv/2Mpz/nfWxn5GcTLUZWO1YO6UQAbB2qsiew=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.16] xen/arm: p2m: Populate pages for GICv2 mapping in p2m_init()
Message-Id: <E1oo9yQ-0002E6-HC@xenbits.xenproject.org>
Date: Thu, 27 Oct 2022 20:59:34 +0000

commit e5a5bdeba6a0c3eacd2ba39c1ee36b3c54e77dca
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Tue Oct 25 09:21:12 2022 +0000
Commit:     Julien Grall <jgrall@amazon.com>
CommitDate: Tue Oct 25 20:54:26 2022 +0100

    xen/arm: p2m: Populate pages for GICv2 mapping in p2m_init()
    
    Hardware using GICv2 needs a P2M mapping of the 8KB GICv2 area created
    when the domain is created. In the worst case this requires 6 P2M pages,
    as the two pages will be consecutive but not necessarily in the same L3
    page table; to also keep a buffer, populate 16 pages as the default
    value for the P2M pages pool in p2m_init() at the domain creation stage,
    which satisfies the GICv2 requirement. For GICv3, the above-mentioned
    P2M mapping is not necessary, but since the 16 pages allocated here
    would not be lost, populate them unconditionally.
    
    With the default 16 P2M pages populated, domain creation can now fail
    with P2M pages already in use. To properly free the P2M in this case,
    first make preemption of p2m_teardown() optional, then call
    p2m_teardown() and p2m_set_allocation(d, 0, NULL) non-preemptively in
    p2m_final_teardown(). As a non-preemptive p2m_teardown() should only
    return 0, use a BUG_ON to confirm that.
    
    Since p2m_final_teardown() is called either after
    domain_relinquish_resources(), where relinquish_p2m_mapping() has been
    called, or from the failure path of domain_create()/arch_domain_create(),
    where mappings that require p2m_put_l3_page() should never be created,
    relinquish_p2m_mapping() is not added to p2m_final_teardown(); in-code
    comments are added to document this.
    
    Fixes: cbea5a1149ca ("xen/arm: Allocate and free P2M pages from the P2M pool")
    Suggested-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
    (cherry picked from commit: c7cff1188802646eaa38e918e5738da0e84949be)
---
 xen/arch/arm/domain.c     |  2 +-
 xen/arch/arm/p2m.c        | 34 ++++++++++++++++++++++++++++++++--
 xen/include/asm-arm/p2m.h | 14 ++++++++++----
 3 files changed, 43 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index a818f33a1a..c7feaa323a 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -1059,7 +1059,7 @@ int domain_relinquish_resources(struct domain *d)
             return ret;
 
     PROGRESS(p2m):
-        ret = p2m_teardown(d);
+        ret = p2m_teardown(d, true);
         if ( ret )
             return ret;
 
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 4f7d923ad9..6f87e17c1d 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1661,7 +1661,7 @@ static void p2m_free_vmid(struct domain *d)
     spin_unlock(&vmid_alloc_lock);
 }
 
-int p2m_teardown(struct domain *d)
+int p2m_teardown(struct domain *d, bool allow_preemption)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     unsigned long count = 0;
@@ -1669,6 +1669,9 @@ int p2m_teardown(struct domain *d)
     unsigned int i;
     int rc = 0;
 
+    if ( page_list_empty(&p2m->pages) )
+        return 0;
+
     p2m_write_lock(p2m);
 
     /*
@@ -1692,7 +1695,7 @@ int p2m_teardown(struct domain *d)
         p2m_free_page(p2m->domain, pg);
         count++;
         /* Arbitrarily preempt every 512 iterations */
-        if ( !(count % 512) && hypercall_preempt_check() )
+        if ( allow_preemption && !(count % 512) && hypercall_preempt_check() )
         {
             rc = -ERESTART;
             break;
@@ -1712,7 +1715,20 @@ void p2m_final_teardown(struct domain *d)
     if ( !p2m->domain )
         return;
 
+    /*
+     * No need to call relinquish_p2m_mapping() here because
+     * p2m_final_teardown() is called either after domain_relinquish_resources()
+     * where relinquish_p2m_mapping() has been called, or from failure path of
+     * domain_create()/arch_domain_create() where mappings that require
+     * p2m_put_l3_page() should never be created. For the latter case, also see
+     * comment on top of the p2m_set_entry() for more info.
+     */
+
+    BUG_ON(p2m_teardown(d, false));
     ASSERT(page_list_empty(&p2m->pages));
+
+    while ( p2m_teardown_allocation(d) == -ERESTART )
+        continue; /* No preemption support here */
     ASSERT(page_list_empty(&d->arch.paging.p2m_freelist));
 
     if ( p2m->root )
@@ -1779,6 +1795,20 @@ int p2m_init(struct domain *d)
     if ( rc )
         return rc;
 
+    /*
+     * Hardware using GICv2 needs to create a P2M mapping of 8KB GICv2 area
+     * when the domain is created. Considering the worst case for page
+     * tables and keep a buffer, populate 16 pages to the P2M pages pool here.
+     * For GICv3, the above-mentioned P2M mapping is not necessary, but since
+     * the allocated 16 pages here would not be lost, hence populate these
+     * pages unconditionally.
+     */
+    spin_lock(&d->arch.paging.lock);
+    rc = p2m_set_allocation(d, 16, NULL);
+    spin_unlock(&d->arch.paging.lock);
+    if ( rc )
+        return rc;
+
     return 0;
 }
 
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index c9598740bd..b2725206e8 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -194,14 +194,18 @@ int p2m_init(struct domain *d);
 
 /*
  * The P2M resources are freed in two parts:
- *  - p2m_teardown() will be called when relinquish the resources. It
- *    will free large resources (e.g. intermediate page-tables) that
- *    requires preemption.
+ *  - p2m_teardown() will be called preemptively when relinquish the
+ *    resources, in which case it will free large resources (e.g. intermediate
+ *    page-tables) that requires preemption.
  *  - p2m_final_teardown() will be called when domain struct is been
  *    freed. This *cannot* be preempted and therefore one small
  *    resources should be freed here.
+ *  Note that p2m_final_teardown() will also call p2m_teardown(), to properly
+ *  free the P2M when failures happen in the domain creation with P2M pages
+ *  already in use. In this case p2m_teardown() is called non-preemptively and
+ *  p2m_teardown() will always return 0.
  */
-int p2m_teardown(struct domain *d);
+int p2m_teardown(struct domain *d, bool allow_preemption);
 void p2m_final_teardown(struct domain *d);
 
 /*
@@ -266,6 +270,8 @@ mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
 /*
  * Direct set a p2m entry: only for use by the P2M code.
  * The P2M write lock should be taken.
+ * TODO: Add a check in __p2m_set_entry() to avoid creating a mapping in
+ * arch_domain_create() that requires p2m_put_l3_page() to be called.
  */
 int p2m_set_entry(struct p2m_domain *p2m,
                   gfn_t sgfn,
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16


From xen-changelog-bounces@lists.xenproject.org Fri Oct 28 01:11:10 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Oct 2022 01:11:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431269.684103 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooDtn-0000eM-9O; Fri, 28 Oct 2022 01:11:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431269.684103; Fri, 28 Oct 2022 01:11:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooDtn-0000eA-6G; Fri, 28 Oct 2022 01:11:03 +0000
Received: by outflank-mailman (input) for mailman id 431269;
 Fri, 28 Oct 2022 01:11:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooDtm-0000e4-C0
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 01:11:02 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooDtm-0006Vj-Av
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 01:11:02 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooDtm-0004xy-99
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 01:11:02 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=/OAGiBoKsML53nH3Q6hB4JGg7Y+bE5Yduo1P7Hm1i0Y=; b=nlnICs4JzsGZyE0mBo4ei5IJkZ
	+VT4NgZdRTj+pQnLtaTK7UAnbllO/LA86jK+D2Jt0jpMKriw1gfqrRGmHOcyP0k3zvmbNRSPknq1E
	7ibwcHrUeYK29mxtVzrBvIoh6HOBr5EKcZ14LyAMdJQ479vQQ9ISJvrC0Ky/5FHppdeU=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.13] xen/arm: p2m: Prevent adding mapping when domain is dying
Message-Id: <E1ooDtm-0004xy-99@xenbits.xenproject.org>
Date: Fri, 28 Oct 2022 01:11:02 +0000

commit 5475195ec490a1cbe226ebe7b709119928673cc8
Author:     Julien Grall <jgrall@amazon.com>
AuthorDate: Tue Oct 11 15:47:15 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:47:15 2022 +0200

    xen/arm: p2m: Prevent adding mapping when domain is dying
    
    During the domain destroy process, the domain remains accessible
    until it is fully destroyed, and so does the P2M, because we don't
    bail out early when is_dying is non-zero. If a domain has permission
    to modify another domain's P2M (i.e. dom0, or a stubdomain), then
    foreign mappings can be added past relinquish_p2m_mapping().

    Therefore, we need to prevent mappings from being added while the
    domain is dying. This commit does so by adding a d->is_dying check
    to p2m_set_entry(). It also strengthens the check in
    relinquish_p2m_mapping() to ensure that no mappings can be added to
    the P2M after the P2M lock is released.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Tested-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    master commit: 3ebe773293e3b945460a3d6f54f3b91915397bab
    master date: 2022-10-11 14:20:18 +0200
---
 xen/arch/arm/p2m.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 993fe4ded2..ff74577638 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1089,6 +1089,15 @@ int p2m_set_entry(struct p2m_domain *p2m,
 {
     int rc = 0;
 
+    /*
+     * Any reference taken by the P2M mappings (e.g. foreign mappings) will
+     * be dropped in relinquish_p2m_mapping(). As the P2M will still be
+     * accessible afterwards, we need to prevent mappings from being added
+     * while the domain is dying.
+     */
+    if ( unlikely(p2m->domain->is_dying) )
+        return -ENOMEM;
+
     while ( nr )
     {
         unsigned long mask;
@@ -1578,6 +1587,8 @@ int relinquish_p2m_mapping(struct domain *d)
     unsigned int order;
     gfn_t start, end;
 
+    BUG_ON(!d->is_dying);
+    /* No mappings can be added in the P2M after the P2M lock is released. */
     p2m_write_lock(p2m);
 
     start = p2m->lowest_mapped_gfn;
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.13


From xen-changelog-bounces@lists.xenproject.org Fri Oct 28 01:11:13 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Oct 2022 01:11:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431270.684107 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooDtx-0000fx-Aq; Fri, 28 Oct 2022 01:11:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431270.684107; Fri, 28 Oct 2022 01:11:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooDtx-0000fp-84; Fri, 28 Oct 2022 01:11:13 +0000
Received: by outflank-mailman (input) for mailman id 431270;
 Fri, 28 Oct 2022 01:11:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooDtw-0000fh-F2
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 01:11:12 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooDtw-0006Vt-EG
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 01:11:12 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooDtw-0004yP-DM
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 01:11:12 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=ugFqQ30rw5C+dClWO+nHU1Oyv2nccOWH8aDA7Y3AQYg=; b=MHRNp16dOz9OPTuX1zDc0TwEcR
	LRjOx3XOgspson9D45xZqC6Mj0kwr8tPBV1dWXFxhbfie1T6gcEqyRH4t366hN5mAggdPDwH9LkGU
	sDTRikz0ndoaiQYyO0EaDPIo6sxx9E2gDdNkCFfVMsIE6KKywDhZ9ghi5jWYtPZGJPiU=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.13] xen/arm: p2m: Handle preemption when freeing intermediate page tables
Message-Id: <E1ooDtw-0004yP-DM@xenbits.xenproject.org>
Date: Fri, 28 Oct 2022 01:11:12 +0000

commit 4e38cc1baea00384b208b762bccc624b0e070fed
Author:     Julien Grall <jgrall@amazon.com>
AuthorDate: Tue Oct 11 15:47:41 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:47:41 2022 +0200

    xen/arm: p2m: Handle preemption when freeing intermediate page tables
    
    At the moment the P2M page tables are freed, without any preemption,
    when the domain structure is freed. As the P2M is quite large,
    iterating through it may take more time than is reasonable without
    intermediate preemption (to run softirqs and perhaps the scheduler).

    Split p2m_teardown() in two parts: one preemptible, called when
    relinquishing the resources, and one non-preemptible, called when
    freeing the domain structure.

    As we are now freeing the P2M pages early, we also need to prevent
    further allocation if someone calls p2m_set_entry() past
    p2m_teardown() (I wasn't able to prove this can never happen). This
    is done by the domain->is_dying check added to p2m_set_entry() by
    the previous patch.

    Similarly, we want to make sure that no one can access the freed
    pages. Therefore the root is cleared before the pages are freed.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Tested-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    master commit: 3202084566bba0ef0c45caf8c24302f83d92f9c8
    master date: 2022-10-11 14:20:56 +0200
---
 xen/arch/arm/domain.c        | 12 +++++++++--
 xen/arch/arm/p2m.c           | 47 +++++++++++++++++++++++++++++++++++++++++---
 xen/include/asm-arm/domain.h |  1 +
 xen/include/asm-arm/p2m.h    | 13 ++++++++++--
 4 files changed, 66 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index ddeccb992c..1e24a7dbb4 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -775,10 +775,10 @@ fail:
 void arch_domain_destroy(struct domain *d)
 {
     /* IOMMU page table is shared with P2M, always call
-     * iommu_domain_destroy() before p2m_teardown().
+     * iommu_domain_destroy() before p2m_final_teardown().
      */
     iommu_domain_destroy(d);
-    p2m_teardown(d);
+    p2m_final_teardown(d);
     domain_vgic_free(d);
     domain_vuart_free(d);
     free_xenheap_page(d->shared_info);
@@ -1014,6 +1014,14 @@ int domain_relinquish_resources(struct domain *d)
         if ( ret )
             return ret;
 
+        d->arch.relmem = RELMEM_p2m;
+        /* Fallthrough */
+
+    case RELMEM_p2m:
+        ret = p2m_teardown(d);
+        if ( ret )
+            return ret;
+
         d->arch.relmem = RELMEM_done;
         /* Fallthrough */
 
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index ff74577638..42638787a2 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1495,17 +1495,58 @@ static void p2m_free_vmid(struct domain *d)
     spin_unlock(&vmid_alloc_lock);
 }
 
-void p2m_teardown(struct domain *d)
+int p2m_teardown(struct domain *d)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    unsigned long count = 0;
     struct page_info *pg;
+    unsigned int i;
+    int rc = 0;
+
+    p2m_write_lock(p2m);
+
+    /*
+     * We are about to free the intermediate page-tables, so clear the
+     * root to prevent any walk to use them.
+     */
+    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
+        clear_and_clean_page(p2m->root + i);
+
+    /*
+     * The domain will not be scheduled anymore, so in theory we should
+     * not need to flush the TLBs. Do it for safety purposes.
+     *
+     * Note that all the devices have already been de-assigned. So we don't
+     * need to flush the IOMMU TLB here.
+     */
+    p2m_force_tlb_flush_sync(p2m);
+
+    while ( (pg = page_list_remove_head(&p2m->pages)) )
+    {
+        free_domheap_page(pg);
+        count++;
+        /* Arbitrarily preempt every 512 iterations */
+        if ( !(count % 512) && hypercall_preempt_check() )
+        {
+            rc = -ERESTART;
+            break;
+        }
+    }
+
+    p2m_write_unlock(p2m);
+
+    return rc;
+}
+
+void p2m_final_teardown(struct domain *d)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
     /* p2m not actually initialized */
     if ( !p2m->domain )
         return;
 
-    while ( (pg = page_list_remove_head(&p2m->pages)) )
-        free_domheap_page(pg);
+    ASSERT(page_list_empty(&p2m->pages));
 
     if ( p2m->root )
         free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index f1776c6c08..9b44a9648c 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -62,6 +62,7 @@ struct arch_domain
         RELMEM_xen,
         RELMEM_page,
         RELMEM_mapping,
+        RELMEM_p2m,
         RELMEM_done,
     } relmem;
 
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 5fdb6e8183..20df621271 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -171,8 +171,17 @@ void setup_virt_paging(void);
 /* Init the datastructures for later use by the p2m code */
 int p2m_init(struct domain *d);
 
-/* Return all the p2m resources to Xen. */
-void p2m_teardown(struct domain *d);
+/*
+ * The P2M resources are freed in two parts:
+ *  - p2m_teardown() will be called when relinquishing the resources. It
+ *    will free large resources (e.g. intermediate page-tables) that
+ *    require preemption.
+ *  - p2m_final_teardown() will be called when the domain struct is being
+ *    freed. This *cannot* be preempted and therefore only small
+ *    resources should be freed here.
+ */
+int p2m_teardown(struct domain *d);
+void p2m_final_teardown(struct domain *d);
 
 /*
  * Remove mapping refcount on each mapping page in the p2m
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.13


From xen-changelog-bounces@lists.xenproject.org Fri Oct 28 01:11:23 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Oct 2022 01:11:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431271.684111 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooDu7-0000ic-CV; Fri, 28 Oct 2022 01:11:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431271.684111; Fri, 28 Oct 2022 01:11:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooDu7-0000iT-9b; Fri, 28 Oct 2022 01:11:23 +0000
Received: by outflank-mailman (input) for mailman id 431271;
 Fri, 28 Oct 2022 01:11:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooDu6-0000iK-Ik
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 01:11:22 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooDu6-0006WA-Hr
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 01:11:22 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooDu6-0004yo-H4
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 01:11:22 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=FxZlolxK60N0ClvpKJ1s6VuXNyY5SEvIyCaGj3+E2hY=; b=RxTu/xf1djpUlJ2zXJJa0ZKa8v
	ltaHzvI4WVawLEQNwifgVr8G0mNUaTbaPR4pZGXOlku02FVh+5Y8fpu9VjLjNX7q9+USbHXIwKUur
	mHZno5so68c0+ehUXcgQFPxfLG59GQbTJ02LjtEUWpaCcDffinj3C1obPr7XGNgChyBM=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.13] x86/p2m: add option to skip root pagetable removal in p2m_teardown()
Message-Id: <E1ooDu6-0004yo-H4@xenbits.xenproject.org>
Date: Fri, 28 Oct 2022 01:11:22 +0000

commit 763f965d04c5eb01890f697aaaaa9120d552672a
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 15:48:01 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:48:01 2022 +0200

    x86/p2m: add option to skip root pagetable removal in p2m_teardown()
    
    Add a new parameter to p2m_teardown() in order to select whether the
    root page table should also be freed.  Note that all users are
    adjusted to pass the parameter to remove the root page tables, so
    behavior is not modified.
    
    No functional change intended.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Suggested-by: Julien Grall <julien@xen.org>
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: 1df52a270225527ae27bfa2fc40347bf93b78357
    master date: 2022-10-11 14:21:23 +0200
---
 xen/arch/x86/mm/hap/hap.c       |  6 +++---
 xen/arch/x86/mm/p2m.c           | 20 ++++++++++++++++----
 xen/arch/x86/mm/shadow/common.c |  4 ++--
 xen/include/asm-x86/p2m.h       |  2 +-
 4 files changed, 22 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 9aac006d65..c2d425a4b1 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -521,18 +521,18 @@ void hap_final_teardown(struct domain *d)
         }
 
         for ( i = 0; i < MAX_ALTP2M; i++ )
-            p2m_teardown(d->arch.altp2m_p2m[i]);
+            p2m_teardown(d->arch.altp2m_p2m[i], true);
     }
 
     /* Destroy nestedp2m's first */
     for (i = 0; i < MAX_NESTEDP2M; i++) {
-        p2m_teardown(d->arch.nested_p2m[i]);
+        p2m_teardown(d->arch.nested_p2m[i], true);
     }
 
     if ( d->arch.paging.hap.total_pages != 0 )
         hap_teardown(d, NULL);
 
-    p2m_teardown(p2m_get_hostp2m(d));
+    p2m_teardown(p2m_get_hostp2m(d), true);
     /* Free any memory that the p2m teardown released */
     paging_lock(d);
     hap_set_allocation(d, 0, NULL);
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 91f7b7760c..859edfc95b 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -737,11 +737,11 @@ int p2m_alloc_table(struct p2m_domain *p2m)
  * hvm fixme: when adding support for pvh non-hardware domains, this path must
  * cleanup any foreign p2m types (release refcnts on them).
  */
-void p2m_teardown(struct p2m_domain *p2m)
+void p2m_teardown(struct p2m_domain *p2m, bool remove_root)
 /* Return all the p2m pages to Xen.
  * We know we don't have any extra mappings to these pages */
 {
-    struct page_info *pg;
+    struct page_info *pg, *root_pg = NULL;
     struct domain *d;
 
     if (p2m == NULL)
@@ -751,10 +751,22 @@ void p2m_teardown(struct p2m_domain *p2m)
 
     p2m_lock(p2m);
     ASSERT(atomic_read(&d->shr_pages) == 0);
-    p2m->phys_table = pagetable_null();
+
+    if ( remove_root )
+        p2m->phys_table = pagetable_null();
+    else if ( !pagetable_is_null(p2m->phys_table) )
+    {
+        root_pg = pagetable_get_page(p2m->phys_table);
+        clear_domain_page(pagetable_get_mfn(p2m->phys_table));
+    }
 
     while ( (pg = page_list_remove_head(&p2m->pages)) )
-        d->arch.paging.free_page(d, pg);
+        if ( pg != root_pg )
+            d->arch.paging.free_page(d, pg);
+
+    if ( root_pg )
+        page_list_add(root_pg, &p2m->pages);
+
     p2m_unlock(p2m);
 }
 
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index dd8d9240ea..68d2679c7a 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2684,7 +2684,7 @@ int shadow_enable(struct domain *d, u32 mode)
     paging_unlock(d);
  out_unlocked:
     if ( rv != 0 && !pagetable_is_null(p2m_get_pagetable(p2m)) )
-        p2m_teardown(p2m);
+        p2m_teardown(p2m, true);
     if ( rv != 0 && pg != NULL )
     {
         pg->count_info &= ~PGC_count_mask;
@@ -2835,7 +2835,7 @@ void shadow_final_teardown(struct domain *d)
         shadow_teardown(d, NULL);
 
     /* It is now safe to pull down the p2m map. */
-    p2m_teardown(p2m_get_hostp2m(d));
+    p2m_teardown(p2m_get_hostp2m(d), true);
     /* Free any shadow memory that the p2m teardown released */
     paging_lock(d);
     shadow_set_allocation(d, 0, NULL);
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 807dc4b1a9..cab4ca60fa 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -599,7 +599,7 @@ int p2m_init(struct domain *d);
 int p2m_alloc_table(struct p2m_domain *p2m);
 
 /* Return all the p2m resources to Xen. */
-void p2m_teardown(struct p2m_domain *p2m);
+void p2m_teardown(struct p2m_domain *p2m, bool remove_root);
 void p2m_final_teardown(struct domain *d);
 
 /* Add a page to a domain's p2m table */
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.13


From xen-changelog-bounces@lists.xenproject.org Fri Oct 28 01:11:33 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Oct 2022 01:11:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431272.684114 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooDuH-0000m6-F0; Fri, 28 Oct 2022 01:11:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431272.684114; Fri, 28 Oct 2022 01:11:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooDuH-0000ly-CV; Fri, 28 Oct 2022 01:11:33 +0000
Received: by outflank-mailman (input) for mailman id 431272;
 Fri, 28 Oct 2022 01:11:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooDuG-0000lq-N3
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 01:11:32 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooDuG-0006Wf-MB
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 01:11:32 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooDuG-0004zD-KL
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 01:11:32 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=wMM7b7VBnUqeXEOu57gX1MDbrVjQSDHtsck42ZmFw0c=; b=sDsFTEvmIdRAYfMZJItXtoTtIQ
	UXZ1PB2TQv5kO5/T7t3m0PDrdtjn9LrRMMYy+I6hb7QbC8fiFTdVonxSnHgQOy34Cda5KUArotnku
	omjumaLkbXt+bH6dvs1zk6z3upzbMupTexBF6s/8ozpSmmd7M7vmB8Nhh15t+9pnIhoY=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.13] x86/HAP: adjust monitor table related error handling
Message-Id: <E1ooDuG-0004zD-KL@xenbits.xenproject.org>
Date: Fri, 28 Oct 2022 01:11:32 +0000

commit 0021c269786e0442d6f922d110d957867fff421d
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 15:48:23 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:48:23 2022 +0200

    x86/HAP: adjust monitor table related error handling
    
    hap_make_monitor_table() will return INVALID_MFN if it encounters an
    error condition, but hap_update_paging_modes() wasn’t handling this
    value, resulting in an inappropriate value being stored in
    monitor_table. This would subsequently misguide at least
    hap_vcpu_teardown(). Avoid this by bailing early.
    
    Further, when a domain has already crashed or (perhaps less
    important, as there's no such path known to lead here) is already
    dying, avoid calling domain_crash() on it again - that's at best
    confusing.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    master commit: 5b44a61180f4f2e4f490a28400c884dd357ff45d
    master date: 2022-10-11 14:21:56 +0200
---
 xen/arch/x86/mm/hap/hap.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index c2d425a4b1..d3931b4e49 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -39,6 +39,7 @@
 #include <asm/domain.h>
 #include <xen/numa.h>
 #include <asm/hvm/nestedhvm.h>
+#include <public/sched.h>
 
 #include "private.h"
 
@@ -405,8 +406,13 @@ static mfn_t hap_make_monitor_table(struct vcpu *v)
     return m4mfn;
 
  oom:
-    printk(XENLOG_G_ERR "out of memory building monitor pagetable\n");
-    domain_crash(d);
+    if ( !d->is_dying &&
+         (!d->is_shutting_down || d->shutdown_code != SHUTDOWN_crash) )
+    {
+        printk(XENLOG_G_ERR "%pd: out of memory building monitor pagetable\n",
+               d);
+        domain_crash(d);
+    }
     return INVALID_MFN;
 }
 
@@ -693,6 +699,9 @@ static void hap_update_paging_modes(struct vcpu *v)
     if ( pagetable_is_null(v->arch.monitor_table) )
     {
         mfn_t mmfn = hap_make_monitor_table(v);
+
+        if ( mfn_eq(mmfn, INVALID_MFN) )
+            goto unlock;
         v->arch.monitor_table = pagetable_from_mfn(mmfn);
         make_cr3(v, mmfn);
         hvm_update_host_cr3(v);
@@ -701,6 +710,7 @@ static void hap_update_paging_modes(struct vcpu *v)
     /* CR3 is effectively updated by a mode change. Flush ASIDs, etc. */
     hap_update_cr3(v, 0, false);
 
+ unlock:
     paging_unlock(d);
     put_gfn(d, cr3_gfn);
 }
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.13


From xen-changelog-bounces@lists.xenproject.org Fri Oct 28 01:11:43 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Oct 2022 01:11:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431273.684119 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooDuR-0000p4-HB; Fri, 28 Oct 2022 01:11:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431273.684119; Fri, 28 Oct 2022 01:11:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooDuR-0000ox-E4; Fri, 28 Oct 2022 01:11:43 +0000
Received: by outflank-mailman (input) for mailman id 431273;
 Fri, 28 Oct 2022 01:11:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooDuQ-0000op-QH
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 01:11:42 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooDuQ-0006YN-PU
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 01:11:42 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooDuQ-0004zf-Oh
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 01:11:42 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=76t7U4JH4+2o36HvsNst0ja7SVjtPta2y7HwQ9iyYbo=; b=3uUcw8v/sYMikZ+cYRl5qcHuI8
	rcn6+4R5QH4karJgtGTDsPFCeoZ9ixbjVoxa+P1WEL+O+JliWQF9e2ZsBwdyCsCO2D7X3Fwii5H+9
	oPWMGweBAPGRiZr4k0ZXvQN7en/ICww92kdkXMNA+yOG3GAxFVXmESEVyth2/f0a34Ko=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.13] x86/shadow: tolerate failure of sh_set_toplevel_shadow()
Message-Id: <E1ooDuQ-0004zf-Oh@xenbits.xenproject.org>
Date: Fri, 28 Oct 2022 01:11:42 +0000

commit aa7891098cc46a7a11b2d823cd8386be8b04c453
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 15:48:59 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:48:59 2022 +0200

    x86/shadow: tolerate failure of sh_set_toplevel_shadow()
    
    Subsequently sh_set_toplevel_shadow() will be adjusted to install a
    blank entry in case prealloc fails. There are, in fact, pre-existing
    error paths which would put in place a blank entry. The 4- and 2-level
    code in sh_update_cr3(), however, assume the top level entry to be
    valid.
    
    Hence bail from the function in the unlikely event that it's not. Note
    that 3-level logic works differently: In particular a guest is free to
    supply a PDPTR pointing at 4 non-present (or otherwise deemed invalid)
    entries. The guest will crash, but we already cope with that.
    
    Really mfn_valid() is likely wrong to use in sh_set_toplevel_shadow(),
    and it should instead be !mfn_eq(gmfn, INVALID_MFN). Avoid such a change
    in security context, but add a respective assertion.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: eac000978c1feb5a9ee3236ab0c0da9a477e5336
    master date: 2022-10-11 14:22:24 +0200
---
 xen/arch/x86/mm/shadow/multi.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 61e9cc951e..bb78b387eb 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -3861,6 +3861,7 @@ sh_set_toplevel_shadow(struct vcpu *v,
     /* Now figure out the new contents: is this a valid guest MFN? */
     if ( !mfn_valid(gmfn) )
     {
+        ASSERT(mfn_eq(gmfn, INVALID_MFN));
         new_entry = pagetable_null();
         goto install_new_entry;
     }
@@ -4014,6 +4015,11 @@ sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
     if ( sh_remove_write_access(d, gmfn, 2, 0) != 0 )
         flush_tlb_mask(d->dirty_cpumask);
     sh_set_toplevel_shadow(v, 0, gmfn, SH_type_l2_shadow);
+    if ( unlikely(pagetable_is_null(v->arch.shadow_table[0])) )
+    {
+        ASSERT(d->is_dying || d->is_shutting_down);
+        return;
+    }
 #elif GUEST_PAGING_LEVELS == 3
     /* PAE guests have four shadow_table entries, based on the
      * current values of the guest's four l3es. */
@@ -4059,6 +4065,11 @@ sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
     if ( sh_remove_write_access(d, gmfn, 4, 0) != 0 )
         flush_tlb_mask(d->dirty_cpumask);
     sh_set_toplevel_shadow(v, 0, gmfn, SH_type_l4_shadow);
+    if ( unlikely(pagetable_is_null(v->arch.shadow_table[0])) )
+    {
+        ASSERT(d->is_dying || d->is_shutting_down);
+        return;
+    }
     if ( !shadow_mode_external(d) && !is_pv_32bit_domain(d) )
     {
         mfn_t smfn = pagetable_get_mfn(v->arch.shadow_table[0]);
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.13


From xen-changelog-bounces@lists.xenproject.org Fri Oct 28 01:11:53 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.13] x86/shadow: tolerate failure in shadow_prealloc()
Message-Id: <E1ooDua-000509-S7@xenbits.xenproject.org>
Date: Fri, 28 Oct 2022 01:11:52 +0000

commit 181ff7aced0e2afec4cfa57e015d01e0a0b3be59
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 15:49:18 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:49:18 2022 +0200

    x86/shadow: tolerate failure in shadow_prealloc()
    
    Prevent _shadow_prealloc() from calling BUG() when unable to fulfill
    the pre-allocation and instead return true/false.  Modify
    shadow_prealloc() to crash the domain on allocation failure (if the
    domain is not already dying), as shadow cannot operate normally after
    that.  Modify callers to also gracefully handle {_,}shadow_prealloc()
    failing to fulfill the request.
    
    Note this in turn requires adjusting the callers of
    sh_make_monitor_table() also to handle it returning INVALID_MFN.
    sh_update_paging_modes() is also modified to add additional error
    paths in case of allocation failure; some of those will return with
    null monitor page tables (and the domain likely crashed).  This is no
    different from the current error paths, but the newly introduced ones
    are more likely to trigger.
    
    The newly added failure points in sh_update_paging_modes() also
    require that on some error return paths the previous structures are
    cleared, and thus the monitor table is null.
    
    While there adjust the 'type' parameter type of shadow_prealloc() to
    unsigned int rather than u32.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: b7f93c6afb12b6061e2d19de2f39ea09b569ac68
    master date: 2022-10-11 14:22:53 +0200
---
 xen/arch/x86/mm/shadow/common.c  | 62 ++++++++++++++++++++++++++++++----------
 xen/arch/x86/mm/shadow/multi.c   | 21 ++++++++++----
 xen/arch/x86/mm/shadow/private.h |  3 +-
 3 files changed, 65 insertions(+), 21 deletions(-)

diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 68d2679c7a..ab8cf7aa8c 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -36,6 +36,7 @@
 #include <asm/shadow.h>
 #include <asm/hvm/ioreq.h>
 #include <xen/numa.h>
+#include <public/sched.h>
 #include "private.h"
 
 DEFINE_PER_CPU(uint32_t,trace_shadow_path_flags);
@@ -896,14 +897,15 @@ static inline void trace_shadow_prealloc_unpin(struct domain *d, mfn_t smfn)
 
 /* Make sure there are at least count order-sized pages
  * available in the shadow page pool. */
-static void _shadow_prealloc(struct domain *d, unsigned int pages)
+static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
 {
     struct vcpu *v;
     struct page_info *sp, *t;
     mfn_t smfn;
     int i;
 
-    if ( d->arch.paging.shadow.free_pages >= pages ) return;
+    if ( d->arch.paging.shadow.free_pages >= pages )
+        return true;
 
     /* Shouldn't have enabled shadows if we've no vcpus. */
     ASSERT(d->vcpu && d->vcpu[0]);
@@ -919,7 +921,8 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
         sh_unpin(d, smfn);
 
         /* See if that freed up enough space */
-        if ( d->arch.paging.shadow.free_pages >= pages ) return;
+        if ( d->arch.paging.shadow.free_pages >= pages )
+            return true;
     }
 
     /* Stage two: all shadow pages are in use in hierarchies that are
@@ -940,7 +943,7 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
                 if ( d->arch.paging.shadow.free_pages >= pages )
                 {
                     flush_tlb_mask(d->dirty_cpumask);
-                    return;
+                    return true;
                 }
             }
         }
@@ -953,7 +956,12 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
            d->arch.paging.shadow.total_pages,
            d->arch.paging.shadow.free_pages,
            d->arch.paging.shadow.p2m_pages);
-    BUG();
+
+    ASSERT(d->is_dying);
+
+    flush_tlb_mask(d->dirty_cpumask);
+
+    return false;
 }
 
 /* Make sure there are at least count pages of the order according to
@@ -961,9 +969,19 @@ static void _shadow_prealloc(struct domain *d, unsigned int pages)
  * This must be called before any calls to shadow_alloc().  Since this
  * will free existing shadows to make room, it must be called early enough
  * to avoid freeing shadows that the caller is currently working on. */
-void shadow_prealloc(struct domain *d, u32 type, unsigned int count)
+bool shadow_prealloc(struct domain *d, unsigned int type, unsigned int count)
 {
-    return _shadow_prealloc(d, shadow_size(type) * count);
+    bool ret = _shadow_prealloc(d, shadow_size(type) * count);
+
+    if ( !ret && !d->is_dying &&
+         (!d->is_shutting_down || d->shutdown_code != SHUTDOWN_crash) )
+        /*
+         * Failing to allocate memory required for shadow usage can only result in
+         * a domain crash, do it here rather that relying on every caller to do it.
+         */
+        domain_crash(d);
+
+    return ret;
 }
 
 /* Deliberately free all the memory we can: this will tear down all of
@@ -1186,7 +1204,7 @@ void shadow_free(struct domain *d, mfn_t smfn)
 static struct page_info *
 shadow_alloc_p2m_page(struct domain *d)
 {
-    struct page_info *pg;
+    struct page_info *pg = NULL;
 
     /* This is called both from the p2m code (which never holds the
      * paging lock) and the log-dirty code (which always does). */
@@ -1204,16 +1222,18 @@ shadow_alloc_p2m_page(struct domain *d)
                     d->arch.paging.shadow.p2m_pages,
                     shadow_min_acceptable_pages(d));
         }
-        paging_unlock(d);
-        return NULL;
+        goto out;
     }
 
-    shadow_prealloc(d, SH_type_p2m_table, 1);
+    if ( !shadow_prealloc(d, SH_type_p2m_table, 1) )
+        goto out;
+
     pg = mfn_to_page(shadow_alloc(d, SH_type_p2m_table, 0));
     d->arch.paging.shadow.p2m_pages++;
     d->arch.paging.shadow.total_pages--;
     ASSERT(!page_get_owner(pg) && !(pg->count_info & PGC_count_mask));
 
+ out:
     paging_unlock(d);
 
     return pg;
@@ -1304,7 +1324,9 @@ int shadow_set_allocation(struct domain *d, unsigned int pages, bool *preempted)
         else if ( d->arch.paging.shadow.total_pages > pages )
         {
             /* Need to return memory to domheap */
-            _shadow_prealloc(d, 1);
+            if ( !_shadow_prealloc(d, 1) )
+                return -ENOMEM;
+
             sp = page_list_remove_head(&d->arch.paging.shadow.freelist);
             ASSERT(sp);
             /*
@@ -2396,12 +2418,13 @@ static void sh_update_paging_modes(struct vcpu *v)
     if ( mfn_eq(v->arch.paging.shadow.oos_snapshot[0], INVALID_MFN) )
     {
         int i;
+
+        if ( !shadow_prealloc(d, SH_type_oos_snapshot, SHADOW_OOS_PAGES) )
+            return;
+
         for(i = 0; i < SHADOW_OOS_PAGES; i++)
-        {
-            shadow_prealloc(d, SH_type_oos_snapshot, 1);
             v->arch.paging.shadow.oos_snapshot[i] =
                 shadow_alloc(d, SH_type_oos_snapshot, 0);
-        }
     }
 #endif /* OOS */
 
@@ -2463,6 +2486,10 @@ static void sh_update_paging_modes(struct vcpu *v)
         if ( pagetable_is_null(v->arch.monitor_table) )
         {
             mfn_t mmfn = v->arch.paging.mode->shadow.make_monitor_table(v);
+
+            if ( mfn_eq(mmfn, INVALID_MFN) )
+                return;
+
             v->arch.monitor_table = pagetable_from_mfn(mmfn);
             make_cr3(v, mmfn);
             hvm_update_host_cr3(v);
@@ -2500,6 +2527,11 @@ static void sh_update_paging_modes(struct vcpu *v)
                 old_mfn = pagetable_get_mfn(v->arch.monitor_table);
                 v->arch.monitor_table = pagetable_null();
                 new_mfn = v->arch.paging.mode->shadow.make_monitor_table(v);
+                if ( mfn_eq(new_mfn, INVALID_MFN) )
+                {
+                    old_mode->shadow.destroy_monitor_table(v, old_mfn);
+                    return;
+                }
                 v->arch.monitor_table = pagetable_from_mfn(new_mfn);
                 SHADOW_PRINTK("new monitor table %"PRI_mfn "\n",
                                mfn_x(new_mfn));
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index bb78b387eb..a58493fb01 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -1524,7 +1524,8 @@ sh_make_monitor_table(struct vcpu *v)
     ASSERT(pagetable_get_pfn(v->arch.monitor_table) == 0);
 
     /* Guarantee we can get the memory we need */
-    shadow_prealloc(d, SH_type_monitor_table, CONFIG_PAGING_LEVELS);
+    if ( !shadow_prealloc(d, SH_type_monitor_table, CONFIG_PAGING_LEVELS) )
+        return INVALID_MFN;
 
     {
         mfn_t m4mfn;
@@ -3052,9 +3053,14 @@ static int sh_page_fault(struct vcpu *v,
      * Preallocate shadow pages *before* removing writable accesses
      * otherwhise an OOS L1 might be demoted and promoted again with
      * writable mappings. */
-    shadow_prealloc(d,
-                    SH_type_l1_shadow,
-                    GUEST_PAGING_LEVELS < 4 ? 1 : GUEST_PAGING_LEVELS - 1);
+    if ( !shadow_prealloc(d, SH_type_l1_shadow,
+                          GUEST_PAGING_LEVELS < 4
+                          ? 1 : GUEST_PAGING_LEVELS - 1) )
+    {
+        paging_unlock(d);
+        put_gfn(d, gfn_x(gfn));
+        return 0;
+    }
 
     rc = gw_remove_write_accesses(v, va, &gw);
 
@@ -3871,7 +3877,12 @@ sh_set_toplevel_shadow(struct vcpu *v,
     if ( !mfn_valid(smfn) )
     {
         /* Make sure there's enough free shadow memory. */
-        shadow_prealloc(d, root_type, 1);
+        if ( !shadow_prealloc(d, root_type, 1) )
+        {
+            new_entry = pagetable_null();
+            goto install_new_entry;
+        }
+
         /* Shadow the page. */
         smfn = sh_make_shadow(v, gmfn, root_type);
     }
diff --git a/xen/arch/x86/mm/shadow/private.h b/xen/arch/x86/mm/shadow/private.h
index 3217777921..e3f91d3576 100644
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -347,7 +347,8 @@ void shadow_promote(struct domain *d, mfn_t gmfn, u32 type);
 void shadow_demote(struct domain *d, mfn_t gmfn, u32 type);
 
 /* Shadow page allocation functions */
-void  shadow_prealloc(struct domain *d, u32 shadow_type, unsigned int count);
+bool __must_check shadow_prealloc(struct domain *d, unsigned int shadow_type,
+                                  unsigned int count);
 mfn_t shadow_alloc(struct domain *d,
                     u32 shadow_type,
                     unsigned long backpointer);
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.13


From xen-changelog-bounces@lists.xenproject.org Fri Oct 28 01:12:04 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.13] x86/p2m: refuse new allocations for dying domains
Message-Id: <E1ooDuk-00050r-VD@xenbits.xenproject.org>
Date: Fri, 28 Oct 2022 01:12:02 +0000

commit 08eec20dc0550316dad64cdc63fee2371702f31f
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 15:49:35 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:49:35 2022 +0200

    x86/p2m: refuse new allocations for dying domains
    
    This will in particular prevent any attempts to add entries to the p2m,
    once - in a subsequent change - non-root entries have been removed.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: ff600a8cf8e36f8ecbffecf96a035952e022ab87
    master date: 2022-10-11 14:23:22 +0200
---
 xen/arch/x86/mm/hap/hap.c       |  5 ++++-
 xen/arch/x86/mm/shadow/common.c | 18 ++++++++++++++----
 2 files changed, 18 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index d3931b4e49..cee8caa7aa 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -244,6 +244,9 @@ static struct page_info *hap_alloc(struct domain *d)
 
     ASSERT(paging_locked_by_me(d));
 
+    if ( unlikely(d->is_dying) )
+        return NULL;
+
     pg = page_list_remove_head(&d->arch.paging.hap.freelist);
     if ( unlikely(!pg) )
         return NULL;
@@ -280,7 +283,7 @@ static struct page_info *hap_alloc_p2m_page(struct domain *d)
         d->arch.paging.hap.p2m_pages++;
         ASSERT(!page_get_owner(pg) && !(pg->count_info & PGC_count_mask));
     }
-    else if ( !d->arch.paging.p2m_alloc_failed )
+    else if ( !d->arch.paging.p2m_alloc_failed && !d->is_dying )
     {
         d->arch.paging.p2m_alloc_failed = 1;
         dprintk(XENLOG_ERR, "d%i failed to allocate from HAP pool\n",
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index ab8cf7aa8c..05d20b8b03 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -907,6 +907,10 @@ static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
     if ( d->arch.paging.shadow.free_pages >= pages )
         return true;
 
+    if ( unlikely(d->is_dying) )
+        /* No reclaim when the domain is dying, teardown will take care of it. */
+        return false;
+
     /* Shouldn't have enabled shadows if we've no vcpus. */
     ASSERT(d->vcpu && d->vcpu[0]);
 
@@ -957,7 +961,7 @@ static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
            d->arch.paging.shadow.free_pages,
            d->arch.paging.shadow.p2m_pages);
 
-    ASSERT(d->is_dying);
+    ASSERT_UNREACHABLE();
 
     flush_tlb_mask(d->dirty_cpumask);
 
@@ -971,10 +975,13 @@ static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
  * to avoid freeing shadows that the caller is currently working on. */
 bool shadow_prealloc(struct domain *d, unsigned int type, unsigned int count)
 {
-    bool ret = _shadow_prealloc(d, shadow_size(type) * count);
+    bool ret;
 
-    if ( !ret && !d->is_dying &&
-         (!d->is_shutting_down || d->shutdown_code != SHUTDOWN_crash) )
+    if ( unlikely(d->is_dying) )
+       return false;
+
+    ret = _shadow_prealloc(d, shadow_size(type) * count);
+    if ( !ret && (!d->is_shutting_down || d->shutdown_code != SHUTDOWN_crash) )
         /*
          * Failing to allocate memory required for shadow usage can only result in
          * a domain crash, do it here rather that relying on every caller to do it.
@@ -1206,6 +1213,9 @@ shadow_alloc_p2m_page(struct domain *d)
 {
     struct page_info *pg = NULL;
 
+    if ( unlikely(d->is_dying) )
+       return NULL;
+
     /* This is called both from the p2m code (which never holds the
      * paging lock) and the log-dirty code (which always does). */
     paging_lock_recursive(d);
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.13


From xen-changelog-bounces@lists.xenproject.org Fri Oct 28 01:12:14 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.13] x86/p2m: truly free paging pool memory for dying domains
Message-Id: <E1ooDuv-00051H-1y@xenbits.xenproject.org>
Date: Fri, 28 Oct 2022 01:12:13 +0000

commit 6e537d36943e5b99afe6194b7fc147610bcf9fba
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 15:49:52 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:49:52 2022 +0200

    x86/p2m: truly free paging pool memory for dying domains
    
    Modify {hap,shadow}_free to free the page immediately if the domain is
    dying, so that pages don't accumulate in the pool when
    {shadow,hap}_final_teardown() get called. This is to limit the amount of
    work which needs to be done there (in a non-preemptable manner).
    
    Note the call to shadow_free() in shadow_free_p2m_page() is moved after
    increasing total_pages, so that the decrease done in shadow_free() in
    case the domain is dying doesn't underflow the counter, even if just for
    a short interval.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: f50a2c0e1d057c00d6061f40ae24d068226052ad
    master date: 2022-10-11 14:23:51 +0200
---
 xen/arch/x86/mm/hap/hap.c       | 12 ++++++++++++
 xen/arch/x86/mm/shadow/common.c | 28 +++++++++++++++++++++++++---
 2 files changed, 37 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index cee8caa7aa..417b6ef37c 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -264,6 +264,18 @@ static void hap_free(struct domain *d, mfn_t mfn)
 
     ASSERT(paging_locked_by_me(d));
 
+    /*
+     * For dying domains, actually free the memory here. This way less work is
+     * left to hap_final_teardown(), which cannot easily have preemption checks
+     * added.
+     */
+    if ( unlikely(d->is_dying) )
+    {
+        free_domheap_page(pg);
+        d->arch.paging.hap.total_pages--;
+        return;
+    }
+
     d->arch.paging.hap.free_pages++;
     page_list_add_tail(pg, &d->arch.paging.hap.freelist);
 }
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 05d20b8b03..c178b9a5d8 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -1155,6 +1155,7 @@ mfn_t shadow_alloc(struct domain *d,
 void shadow_free(struct domain *d, mfn_t smfn)
 {
     struct page_info *next = NULL, *sp = mfn_to_page(smfn);
+    bool dying = ACCESS_ONCE(d->is_dying);
     struct page_list_head *pin_list;
     unsigned int pages;
     u32 shadow_type;
@@ -1197,11 +1198,32 @@ void shadow_free(struct domain *d, mfn_t smfn)
          * just before the allocator hands the page out again. */
         page_set_tlbflush_timestamp(sp);
         perfc_decr(shadow_alloc_count);
-        page_list_add_tail(sp, &d->arch.paging.shadow.freelist);
+
+        /*
+         * For dying domains, actually free the memory here. This way less
+         * work is left to shadow_final_teardown(), which cannot easily have
+         * preemption checks added.
+         */
+        if ( unlikely(dying) )
+        {
+            /*
+             * The backpointer field (sh.back) used by shadow code aliases the
+             * domain owner field, unconditionally clear it here to avoid
+             * free_domheap_page() attempting to parse it.
+             */
+            page_set_owner(sp, NULL);
+            free_domheap_page(sp);
+        }
+        else
+            page_list_add_tail(sp, &d->arch.paging.shadow.freelist);
+
         sp = next;
     }
 
-    d->arch.paging.shadow.free_pages += pages;
+    if ( unlikely(dying) )
+        d->arch.paging.shadow.total_pages -= pages;
+    else
+        d->arch.paging.shadow.free_pages += pages;
 }
 
 /* Divert a page from the pool to be used by the p2m mapping.
@@ -1271,9 +1293,9 @@ shadow_free_p2m_page(struct domain *d, struct page_info *pg)
      * paging lock) and the log-dirty code (which always does). */
     paging_lock_recursive(d);
 
-    shadow_free(d, page_to_mfn(pg));
     d->arch.paging.shadow.p2m_pages--;
     d->arch.paging.shadow.total_pages++;
+    shadow_free(d, page_to_mfn(pg));
 
     paging_unlock(d);
 }
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.13


From xen-changelog-bounces@lists.xenproject.org Fri Oct 28 01:12:24 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.13] x86/p2m: free the paging memory pool preemptively
Message-Id: <E1ooDv5-00051g-5K@xenbits.xenproject.org>
Date: Fri, 28 Oct 2022 01:12:23 +0000

commit 3e7aa35a56f9e9b42c74724c4083026da8ac9bcd
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Tue Oct 11 15:50:10 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:50:10 2022 +0200

    x86/p2m: free the paging memory pool preemptively
    
    The paging memory pool is currently freed in two different places:
    from {shadow,hap}_teardown() via domain_relinquish_resources() and
    from {shadow,hap}_final_teardown() via complete_domain_destroy().
    While the former does handle preemption, the latter doesn't.
    
    Attempt to move as much p2m related freeing as possible to happen
    before the call to {shadow,hap}_teardown(), so that most memory can be
    freed in a preemptive way.  In order to avoid causing issues to
    existing callers, leave the root p2m page tables set and free them in
    {hap,shadow}_final_teardown().  Also modify {hap,shadow}_free to free
    the page immediately if the domain is dying, so that pages don't
    accumulate in the pool when {shadow,hap}_final_teardown() get called.
    
    Move altp2m_vcpu_disable_ve() to be done in hap_teardown(), as that's
    the place where altp2m_active gets disabled now.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Reported-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: e7aa55c0aab36d994bf627c92bd5386ae167e16e
    master date: 2022-10-11 14:24:21 +0200
---
 xen/arch/x86/domain.c           |  7 -------
 xen/arch/x86/mm/hap/hap.c       | 39 +++++++++++++++++++++++++++++----------
 xen/arch/x86/mm/shadow/common.c | 16 ++++++++++++++++
 3 files changed, 45 insertions(+), 17 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 6199f36514..6996c6b06a 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -38,7 +38,6 @@
 #include <xen/livepatch.h>
 #include <public/sysctl.h>
 #include <public/hvm/hvm_vcpu.h>
-#include <asm/altp2m.h>
 #include <asm/regs.h>
 #include <asm/mc146818rtc.h>
 #include <asm/system.h>
@@ -2098,12 +2097,6 @@ int domain_relinquish_resources(struct domain *d)
             vpmu_destroy(v);
         }
 
-        if ( altp2m_active(d) )
-        {
-            for_each_vcpu ( d, v )
-                altp2m_vcpu_disable_ve(v);
-        }
-
         if ( is_pv_domain(d) )
         {
             for_each_vcpu ( d, v )
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 417b6ef37c..92b2014534 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -28,6 +28,7 @@
 #include <xen/domain_page.h>
 #include <xen/guest_access.h>
 #include <xen/keyhandler.h>
+#include <asm/altp2m.h>
 #include <asm/event.h>
 #include <asm/page.h>
 #include <asm/current.h>
@@ -532,18 +533,8 @@ void hap_final_teardown(struct domain *d)
     unsigned int i;
 
     if ( hvm_altp2m_supported() )
-    {
-        d->arch.altp2m_active = 0;
-
-        if ( d->arch.altp2m_eptp )
-        {
-            free_xenheap_page(d->arch.altp2m_eptp);
-            d->arch.altp2m_eptp = NULL;
-        }
-
         for ( i = 0; i < MAX_ALTP2M; i++ )
             p2m_teardown(d->arch.altp2m_p2m[i], true);
-    }
 
     /* Destroy nestedp2m's first */
     for (i = 0; i < MAX_NESTEDP2M; i++) {
@@ -558,6 +549,8 @@ void hap_final_teardown(struct domain *d)
     paging_lock(d);
     hap_set_allocation(d, 0, NULL);
     ASSERT(d->arch.paging.hap.p2m_pages == 0);
+    ASSERT(d->arch.paging.hap.free_pages == 0);
+    ASSERT(d->arch.paging.hap.total_pages == 0);
     paging_unlock(d);
 }
 
@@ -565,6 +558,7 @@ void hap_teardown(struct domain *d, bool *preempted)
 {
     struct vcpu *v;
     mfn_t mfn;
+    unsigned int i;
 
     ASSERT(d->is_dying);
     ASSERT(d != current->domain);
@@ -586,6 +580,31 @@ void hap_teardown(struct domain *d, bool *preempted)
         }
     }
 
+    paging_unlock(d);
+
+    /* Leave the root pt in case we get further attempts to modify the p2m. */
+    if ( hvm_altp2m_supported() )
+    {
+        if ( altp2m_active(d) )
+            for_each_vcpu ( d, v )
+                altp2m_vcpu_disable_ve(v);
+
+        d->arch.altp2m_active = 0;
+
+        FREE_XENHEAP_PAGE(d->arch.altp2m_eptp);
+
+        for ( i = 0; i < MAX_ALTP2M; i++ )
+            p2m_teardown(d->arch.altp2m_p2m[i], false);
+    }
+
+    /* Destroy nestedp2m's after altp2m. */
+    for ( i = 0; i < MAX_NESTEDP2M; i++ )
+        p2m_teardown(d->arch.nested_p2m[i], false);
+
+    p2m_teardown(p2m_get_hostp2m(d), false);
+
+    paging_lock(d);
+
     if ( d->arch.paging.hap.total_pages != 0 )
     {
         hap_set_allocation(d, 0, preempted);
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index c178b9a5d8..8679620f18 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2791,6 +2791,19 @@ void shadow_teardown(struct domain *d, bool *preempted)
         }
     }
 
+    paging_unlock(d);
+
+    p2m_teardown(p2m_get_hostp2m(d), false);
+
+    paging_lock(d);
+
+    /*
+     * Reclaim all shadow memory so that shadow_set_allocation() doesn't find
+     * in-use pages, as _shadow_prealloc() will no longer try to reclaim pages
+     * because the domain is dying.
+     */
+    shadow_blow_tables(d);
+
 #if (SHADOW_OPTIMIZATIONS & (SHOPT_VIRTUAL_TLB|SHOPT_OUT_OF_SYNC))
     /* Free the virtual-TLB array attached to each vcpu */
     for_each_vcpu(d, v)
@@ -2909,6 +2922,9 @@ void shadow_final_teardown(struct domain *d)
                    d->arch.paging.shadow.total_pages,
                    d->arch.paging.shadow.free_pages,
                    d->arch.paging.shadow.p2m_pages);
+    ASSERT(!d->arch.paging.shadow.total_pages);
+    ASSERT(!d->arch.paging.shadow.free_pages);
+    ASSERT(!d->arch.paging.shadow.p2m_pages);
     paging_unlock(d);
 }
 
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.13


From xen-changelog-bounces@lists.xenproject.org Fri Oct 28 01:12:34 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Oct 2022 01:12:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431278.684139 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooDvG-00015W-RQ; Fri, 28 Oct 2022 01:12:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431278.684139; Fri, 28 Oct 2022 01:12:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooDvG-00015M-OV; Fri, 28 Oct 2022 01:12:34 +0000
Received: by outflank-mailman (input) for mailman id 431278;
 Fri, 28 Oct 2022 01:12:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooDvF-00015C-9y
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 01:12:33 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooDvF-0006aA-9C
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 01:12:33 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooDvF-000529-8Y
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 01:12:33 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=MD/pt7UKGZrNGkwmPA5wCpy15Tfi+++42R92Sfwv3AU=; b=CylgFxv1PknVn7HsxitJkrnIeZ
	CLFh6f6CyjCSxqulVAHPcBLx7gedHnIPIx+HvdM7sqi6zg+hWJu2/otJBRdADAlrySls/RWEHCFM5
	dnT7RIMCfSGt5udXH9wzbtrylFRxF3WriyULlBR6QFoxduzE9Nt3dI/NIukFEBcfPYqA=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.13] xen/x86: p2m: Add preemption in p2m_teardown()
Message-Id: <E1ooDvF-000529-8Y@xenbits.xenproject.org>
Date: Fri, 28 Oct 2022 01:12:33 +0000

commit eed4ef4177b8267f2b6f403db00ed393a371285f
Author:     Julien Grall <jgrall@amazon.com>
AuthorDate: Tue Oct 11 15:50:28 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:50:28 2022 +0200

    xen/x86: p2m: Add preemption in p2m_teardown()
    
    The list p2m->pages contains all the pages used by the P2M. On large
    instances this list can be quite long, and the time spent calling
    d->arch.paging.free_page() exceeds 1ms for an 80GB guest on Xen
    running in a nested environment on a c5.metal.
    
    By extrapolation, it would take > 100ms for an 8TB guest (the limit
    we currently security support). So add some preemption in
    p2m_teardown() and propagate it to the callers. Note there are 3
    places where preemption is not enabled:
        - hap_final_teardown()/shadow_final_teardown(): Updates to the
          P2M are prevented once the domain is dying (so no more pages
          can be allocated), and most of the P2M pages will already have
          been freed in a preemptible manner when relinquishing the
          resources. So it is fine to disable preemption here.
        - shadow_enable(): This is fine because it will undo the allocation
          that may have been made by p2m_alloc_table() (so only the root
          page table).
    
    The preemption is arbitrarily checked every 1024 iterations.
    
    We now need to include <xen/event.h> in p2m-basic in order to
    import the definition for local_events_need_delivery() used by
    general_preempt_check(). Ideally, the inclusion should happen in
    xen/sched.h but it opened a can of worms.
    
    Note that with the current approach, Xen doesn't keep track of whether
    the alt/nested P2Ms have been cleared, so there is some redundant work.
    However, this is not expected to incur much overhead (the P2M lock
    shouldn't be contended during teardown), so this optimization is left
    outside of the security fix.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    master commit: 8a2111250b424edc49c65c4d41b276766d30635c
    master date: 2022-10-11 14:24:48 +0200
---
 xen/arch/x86/mm/hap/hap.c       | 22 ++++++++++++++++------
 xen/arch/x86/mm/p2m.c           | 18 +++++++++++++++---
 xen/arch/x86/mm/shadow/common.c | 12 +++++++++---
 xen/include/asm-x86/p2m.h       |  2 +-
 4 files changed, 41 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 92b2014534..34bbe50be0 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -534,17 +534,17 @@ void hap_final_teardown(struct domain *d)
 
     if ( hvm_altp2m_supported() )
         for ( i = 0; i < MAX_ALTP2M; i++ )
-            p2m_teardown(d->arch.altp2m_p2m[i], true);
+            p2m_teardown(d->arch.altp2m_p2m[i], true, NULL);
 
     /* Destroy nestedp2m's first */
     for (i = 0; i < MAX_NESTEDP2M; i++) {
-        p2m_teardown(d->arch.nested_p2m[i], true);
+        p2m_teardown(d->arch.nested_p2m[i], true, NULL);
     }
 
     if ( d->arch.paging.hap.total_pages != 0 )
         hap_teardown(d, NULL);
 
-    p2m_teardown(p2m_get_hostp2m(d), true);
+    p2m_teardown(p2m_get_hostp2m(d), true, NULL);
     /* Free any memory that the p2m teardown released */
     paging_lock(d);
     hap_set_allocation(d, 0, NULL);
@@ -594,14 +594,24 @@ void hap_teardown(struct domain *d, bool *preempted)
         FREE_XENHEAP_PAGE(d->arch.altp2m_eptp);
 
         for ( i = 0; i < MAX_ALTP2M; i++ )
-            p2m_teardown(d->arch.altp2m_p2m[i], false);
+        {
+            p2m_teardown(d->arch.altp2m_p2m[i], false, preempted);
+            if ( preempted && *preempted )
+                return;
+        }
     }
 
     /* Destroy nestedp2m's after altp2m. */
     for ( i = 0; i < MAX_NESTEDP2M; i++ )
-        p2m_teardown(d->arch.nested_p2m[i], false);
+    {
+        p2m_teardown(d->arch.nested_p2m[i], false, preempted);
+        if ( preempted && *preempted )
+            return;
+    }
 
-    p2m_teardown(p2m_get_hostp2m(d), false);
+    p2m_teardown(p2m_get_hostp2m(d), false, preempted);
+    if ( preempted && *preempted )
+        return;
 
     paging_lock(d);
 
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 859edfc95b..5bc2e483a3 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -737,12 +737,13 @@ int p2m_alloc_table(struct p2m_domain *p2m)
  * hvm fixme: when adding support for pvh non-hardware domains, this path must
  * cleanup any foreign p2m types (release refcnts on them).
  */
-void p2m_teardown(struct p2m_domain *p2m, bool remove_root)
+void p2m_teardown(struct p2m_domain *p2m, bool remove_root, bool *preempted)
 /* Return all the p2m pages to Xen.
  * We know we don't have any extra mappings to these pages */
 {
     struct page_info *pg, *root_pg = NULL;
     struct domain *d;
+    unsigned int i = 0;
 
     if (p2m == NULL)
         return;
@@ -761,8 +762,19 @@ void p2m_teardown(struct p2m_domain *p2m, bool remove_root)
     }
 
     while ( (pg = page_list_remove_head(&p2m->pages)) )
-        if ( pg != root_pg )
-            d->arch.paging.free_page(d, pg);
+    {
+        if ( pg == root_pg )
+            continue;
+
+        d->arch.paging.free_page(d, pg);
+
+        /* Arbitrarily check preemption every 1024 iterations */
+        if ( preempted && !(++i % 1024) && general_preempt_check() )
+        {
+            *preempted = true;
+            break;
+        }
+    }
 
     if ( root_pg )
         page_list_add(root_pg, &p2m->pages);
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 8679620f18..e6af359579 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2747,8 +2747,12 @@ int shadow_enable(struct domain *d, u32 mode)
  out_locked:
     paging_unlock(d);
  out_unlocked:
+    /*
+     * This is fine to ignore the preemption here because only the root
+     * will be allocated by p2m_alloc_table().
+     */
     if ( rv != 0 && !pagetable_is_null(p2m_get_pagetable(p2m)) )
-        p2m_teardown(p2m, true);
+        p2m_teardown(p2m, true, NULL);
     if ( rv != 0 && pg != NULL )
     {
         pg->count_info &= ~PGC_count_mask;
@@ -2793,7 +2797,9 @@ void shadow_teardown(struct domain *d, bool *preempted)
 
     paging_unlock(d);
 
-    p2m_teardown(p2m_get_hostp2m(d), false);
+    p2m_teardown(p2m_get_hostp2m(d), false, preempted);
+    if ( preempted && *preempted )
+        return;
 
     paging_lock(d);
 
@@ -2912,7 +2918,7 @@ void shadow_final_teardown(struct domain *d)
         shadow_teardown(d, NULL);
 
     /* It is now safe to pull down the p2m map. */
-    p2m_teardown(p2m_get_hostp2m(d), true);
+    p2m_teardown(p2m_get_hostp2m(d), true, NULL);
     /* Free any shadow memory that the p2m teardown released */
     paging_lock(d);
     shadow_set_allocation(d, 0, NULL);
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index cab4ca60fa..8ba8cd6a02 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -599,7 +599,7 @@ int p2m_init(struct domain *d);
 int p2m_alloc_table(struct p2m_domain *p2m);
 
 /* Return all the p2m resources to Xen. */
-void p2m_teardown(struct p2m_domain *p2m, bool remove_root);
+void p2m_teardown(struct p2m_domain *p2m, bool remove_root, bool *preempted);
 void p2m_final_teardown(struct domain *d);
 
 /* Add a page to a domain's p2m table */
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.13


From xen-changelog-bounces@lists.xenproject.org Fri Oct 28 01:12:44 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Oct 2022 01:12:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431279.684142 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooDvQ-00018g-Sh; Fri, 28 Oct 2022 01:12:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431279.684142; Fri, 28 Oct 2022 01:12:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooDvQ-00018Y-Q7; Fri, 28 Oct 2022 01:12:44 +0000
Received: by outflank-mailman (input) for mailman id 431279;
 Fri, 28 Oct 2022 01:12:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooDvP-00018M-Dp
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 01:12:43 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooDvP-0006aK-DA
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 01:12:43 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooDvP-00052j-Bj
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 01:12:43 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=HXfFAiq1XP7LUtrpAZHsAm1AjlUFW/7AsfYm2pynWZ4=; b=xBO9RBiPG29vuv4eJo+X4Sx+D9
	FuQ0YvkVdoc0mZCbC1sROcTDpACsuOpJ4f5ZLKdciVEp12rfrAZzeSCyv1VlXE7OHebJpaTYjVXg8
	qFNy3jhLJJrm168xQRiQu+GC2X/CNf5g6Pm7wuZtY0q4emBzbJqPT3qJQhJ/EoZm+pp0=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.13] libxl, docs: Use arch-specific default paging memory
Message-Id: <E1ooDvP-00052j-Bj@xenbits.xenproject.org>
Date: Fri, 28 Oct 2022 01:12:43 +0000

commit 9992c089de1fbb4d3217d2421ca60295998645d7
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Tue Oct 11 15:51:26 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:51:26 2022 +0200

    libxl, docs: Use arch-specific default paging memory
    
    The default paging memory (described in the `shadow_memory` entry in
    the xl config) in libxl is used to determine the memory pool size for
    xl guests. Currently this size is only used on x86, and includes a
    part of RAM to shadow the resident processes. Since there are no
    shadow-mode guests on Arm, that part of RAM is not necessary.
    Therefore, this commit splits the function
    `libxl_get_required_shadow_memory()` into arch-specific helpers and
    renames the helper to `libxl__arch_get_required_paging_memory()`.
    
    On x86, this helper returns the original value from
    `libxl_get_required_shadow_memory()`, so no functional change is
    intended.
    
    On Arm, this helper returns 1MB per vCPU plus 4KB per MiB of RAM
    for the P2M map, plus an additional 512KB.
    
    Also update the xl.cfg documentation to add Arm documentation
    according to code changes and correct the comment style following Xen
    coding style.
    
    This is part of CVE-2022-33747 / XSA-409.
    
    Suggested-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    master commit: 156a239ea288972425f967ac807b3cb5b5e14874
    master date: 2022-10-11 14:28:37 +0200
---
 docs/man/xl.cfg.5.pod.in  |  5 +++++
 tools/libxl/libxl_arch.h  |  4 ++++
 tools/libxl/libxl_arm.c   | 12 ++++++++++++
 tools/libxl/libxl_utils.c |  9 ++-------
 tools/libxl/libxl_x86.c   | 12 ++++++++++++
 5 files changed, 35 insertions(+), 7 deletions(-)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 245d3f9472..3b297c6a97 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -1790,6 +1790,11 @@ are not using hardware assisted paging (i.e. you are using shadow
 mode) and your guest workload consists of a very large number of
 similar processes then increasing this value may improve performance.
 
+On Arm, this field is used to determine the size of the guest P2M pages
+pool, and the default value is 1MB per vCPU plus 4KB per MB of RAM for
+the P2M map. Users should adjust this value if bigger P2M pool size is
+needed.
+
 =back
 
 =head3 Processor and Platform Features
diff --git a/tools/libxl/libxl_arch.h b/tools/libxl/libxl_arch.h
index 6a91775b9e..b09f868490 100644
--- a/tools/libxl/libxl_arch.h
+++ b/tools/libxl/libxl_arch.h
@@ -83,6 +83,10 @@ int libxl__arch_extra_memory(libxl__gc *gc,
                              const libxl_domain_build_info *info,
                              uint64_t *out);
 
+_hidden
+unsigned long libxl__arch_get_required_paging_memory(unsigned long maxmem_kb,
+                                                     unsigned int smp_cpus);
+
 #if defined(__i386__) || defined(__x86_64__)
 
 #define LAPIC_BASE_ADDRESS  0xfee00000
diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
index 34f8a29056..f4b3dc8e71 100644
--- a/tools/libxl/libxl_arm.c
+++ b/tools/libxl/libxl_arm.c
@@ -153,6 +153,18 @@ out:
     return rc;
 }
 
+unsigned long libxl__arch_get_required_paging_memory(unsigned long maxmem_kb,
+                                                     unsigned int smp_cpus)
+{
+    /*
+     * 256 pages (1MB) per vcpu,
+     * plus 1 page per MiB of RAM for the P2M map,
+     * This is higher than the minimum that Xen would allocate if no value
+     * were given (but the Xen minimum is for safety, not performance).
+     */
+    return 4 * (256 * smp_cpus + maxmem_kb / 1024);
+}
+
 static struct arch_info {
     const char *guest_type;
     const char *timer_compat;
diff --git a/tools/libxl/libxl_utils.c b/tools/libxl/libxl_utils.c
index b039143b8a..e18b1524ef 100644
--- a/tools/libxl/libxl_utils.c
+++ b/tools/libxl/libxl_utils.c
@@ -18,6 +18,7 @@
 #include <ctype.h>
 
 #include "libxl_internal.h"
+#include "libxl_arch.h"
 #include "_paths.h"
 
 #ifndef LIBXL_HAVE_NONCONST_LIBXL_BASENAME_RETURN_VALUE
@@ -39,13 +40,7 @@ char *libxl_basename(const char *name)
 
 unsigned long libxl_get_required_shadow_memory(unsigned long maxmem_kb, unsigned int smp_cpus)
 {
-    /* 256 pages (1MB) per vcpu,
-       plus 1 page per MiB of RAM for the P2M map,
-       plus 1 page per MiB of RAM to shadow the resident processes.
-       This is higher than the minimum that Xen would allocate if no value
-       were given (but the Xen minimum is for safety, not performance).
-     */
-    return 4 * (256 * smp_cpus + 2 * (maxmem_kb / 1024));
+    return libxl__arch_get_required_paging_memory(maxmem_kb, smp_cpus);
 }
 
 char *libxl_domid_to_name(libxl_ctx *ctx, uint32_t domid)
diff --git a/tools/libxl/libxl_x86.c b/tools/libxl/libxl_x86.c
index f34c0edc10..348876e5c0 100644
--- a/tools/libxl/libxl_x86.c
+++ b/tools/libxl/libxl_x86.c
@@ -681,6 +681,18 @@ int libxl__arch_passthrough_mode_setdefault(libxl__gc *gc,
     return rc;
 }
 
+unsigned long libxl__arch_get_required_paging_memory(unsigned long maxmem_kb,
+                                                     unsigned int smp_cpus)
+{
+    /*
+     * 256 pages (1MB) per vcpu,
+     * plus 1 page per MiB of RAM for the P2M map,
+     * plus 1 page per MiB of RAM to shadow the resident processes.
+     * This is higher than the minimum that Xen would allocate if no value
+     * were given (but the Xen minimum is for safety, not performance).
+     */
+    return 4 * (256 * smp_cpus + 2 * (maxmem_kb / 1024));
+}
 
 /*
  * Local variables:
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.13


From xen-changelog-bounces@lists.xenproject.org Fri Oct 28 01:12:54 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Oct 2022 01:12:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431280.684146 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooDva-0001Ay-UF; Fri, 28 Oct 2022 01:12:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431280.684146; Fri, 28 Oct 2022 01:12:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooDva-0001Ar-Rg; Fri, 28 Oct 2022 01:12:54 +0000
Received: by outflank-mailman (input) for mailman id 431280;
 Fri, 28 Oct 2022 01:12:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooDvZ-0001Al-Gx
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 01:12:53 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooDvZ-0006aX-GH
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 01:12:53 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooDvZ-00053K-Fe
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 01:12:53 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=BeOzCVK5iLulmzyTC3k/3YuGhdKwSw5rEheq8pmXj2g=; b=lppIOHDibypwpmt0MTZQ82ExmT
	FqGYC/ahGnB6KpjsmYstQDkW1HRTFF2mb9c7zhMKmcRJKljgST6tBdFCws3OzCCF/t1YyR6Eob3yb
	MOUyM2Nj5b3gNdKA6sq043QxnRhuqV0LRSlY1ZiXF+rCqoTHd4AWEWlB00HqEF/Z6+gI=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.13] xen/arm: Construct the P2M pages pool for guests
Message-Id: <E1ooDvZ-00053K-Fe@xenbits.xenproject.org>
Date: Fri, 28 Oct 2022 01:12:53 +0000

commit 2ae9bbef0f84a025719382ffcf44882b76316d62
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Tue Oct 11 15:51:45 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:51:45 2022 +0200

    xen/arm: Construct the P2M pages pool for guests
    
    This commit constructs the p2m pages pool for guests from the
    data structure and helper perspective.
    
    This is implemented by:
    
    - Adding a `struct paging_domain`, containing a freelist, a counter
    and a spinlock, to `struct arch_domain` to track the free p2m pages
    and the total number of p2m pages in the p2m pages pool.
    
    - Adding a helper `p2m_get_allocation` to get the p2m pool size.
    
    - Adding a helper `p2m_set_allocation` to set the p2m pages pool
    size. This helper should be called before allocating memory for
    a guest.
    
    - Adding a helper `p2m_teardown_allocation` to free the p2m pages
    pool. This helper should be called during xl domain destruction.
    
    This is part of CVE-2022-33747 / XSA-409.
    
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    master commit: 55914f7fc91a468649b8a3ec3f53ae1c4aca6670
    master date: 2022-10-11 14:28:39 +0200
---
 xen/arch/arm/p2m.c           | 88 ++++++++++++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/domain.h | 10 +++++
 xen/include/asm-arm/p2m.h    |  4 ++
 3 files changed, 102 insertions(+)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 42638787a2..7d6fec7887 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -53,6 +53,92 @@ static uint64_t generate_vttbr(uint16_t vmid, mfn_t root_mfn)
     return (mfn_to_maddr(root_mfn) | ((uint64_t)vmid << 48));
 }
 
+/* Return the size of the pool, rounded up to the nearest MB */
+unsigned int p2m_get_allocation(struct domain *d)
+{
+    unsigned long nr_pages = ACCESS_ONCE(d->arch.paging.p2m_total_pages);
+
+    return ROUNDUP(nr_pages, 1 << (20 - PAGE_SHIFT)) >> (20 - PAGE_SHIFT);
+}
+
+/*
+ * Set the pool of pages to the required number of pages.
+ * Returns 0 for success, non-zero for failure.
+ * Call with d->arch.paging.lock held.
+ */
+int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted)
+{
+    struct page_info *pg;
+
+    ASSERT(spin_is_locked(&d->arch.paging.lock));
+
+    for ( ; ; )
+    {
+        if ( d->arch.paging.p2m_total_pages < pages )
+        {
+            /* Need to allocate more memory from domheap */
+            pg = alloc_domheap_page(NULL, 0);
+            if ( pg == NULL )
+            {
+                printk(XENLOG_ERR "Failed to allocate P2M pages.\n");
+                return -ENOMEM;
+            }
+            ACCESS_ONCE(d->arch.paging.p2m_total_pages) =
+                d->arch.paging.p2m_total_pages + 1;
+            page_list_add_tail(pg, &d->arch.paging.p2m_freelist);
+        }
+        else if ( d->arch.paging.p2m_total_pages > pages )
+        {
+            /* Need to return memory to domheap */
+            pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
+            if( pg )
+            {
+                ACCESS_ONCE(d->arch.paging.p2m_total_pages) =
+                    d->arch.paging.p2m_total_pages - 1;
+                free_domheap_page(pg);
+            }
+            else
+            {
+                printk(XENLOG_ERR
+                       "Failed to free P2M pages, P2M freelist is empty.\n");
+                return -ENOMEM;
+            }
+        }
+        else
+            break;
+
+        /* Check to see if we need to yield and try again */
+        if ( preempted && general_preempt_check() )
+        {
+            *preempted = true;
+            return -ERESTART;
+        }
+    }
+
+    return 0;
+}
+
+int p2m_teardown_allocation(struct domain *d)
+{
+    int ret = 0;
+    bool preempted = false;
+
+    spin_lock(&d->arch.paging.lock);
+    if ( d->arch.paging.p2m_total_pages != 0 )
+    {
+        ret = p2m_set_allocation(d, 0, &preempted);
+        if ( preempted )
+        {
+            spin_unlock(&d->arch.paging.lock);
+            return -ERESTART;
+        }
+        ASSERT(d->arch.paging.p2m_total_pages == 0);
+    }
+    spin_unlock(&d->arch.paging.lock);
+
+    return ret;
+}
+
 /* Unlock the flush and do a P2M TLB flush if necessary */
 void p2m_write_unlock(struct p2m_domain *p2m)
 {
@@ -1567,7 +1653,9 @@ int p2m_init(struct domain *d)
     unsigned int cpu;
 
     rwlock_init(&p2m->lock);
+    spin_lock_init(&d->arch.paging.lock);
     INIT_PAGE_LIST_HEAD(&p2m->pages);
+    INIT_PAGE_LIST_HEAD(&d->arch.paging.p2m_freelist);
 
     p2m->vmid = INVALID_VMID;
 
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 9b44a9648c..7bc14c2e9e 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -42,6 +42,14 @@ struct vtimer {
         uint64_t cval;
 };
 
+struct paging_domain {
+    spinlock_t lock;
+    /* Free P2M pages from the pre-allocated P2M pool */
+    struct page_list_head p2m_freelist;
+    /* Number of pages from the pre-allocated P2M pool */
+    unsigned long p2m_total_pages;
+};
+
 struct arch_domain
 {
 #ifdef CONFIG_ARM_64
@@ -53,6 +61,8 @@ struct arch_domain
 
     struct hvm_domain hvm;
 
+    struct paging_domain paging;
+
     struct vmmio vmmio;
 
     /* Continuable domain_relinquish_resources(). */
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 20df621271..b1c9b947bb 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -197,6 +197,10 @@ void p2m_restore_state(struct vcpu *n);
 /* Print debugging/statistial info about a domain's p2m */
 void p2m_dump_info(struct domain *d);
 
+unsigned int p2m_get_allocation(struct domain *d);
+int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted);
+int p2m_teardown_allocation(struct domain *d);
+
 static inline void p2m_write_lock(struct p2m_domain *p2m)
 {
     write_lock(&p2m->lock);
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.13


From xen-changelog-bounces@lists.xenproject.org Fri Oct 28 01:13:05 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Oct 2022 01:13:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431281.684152 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooDvl-0001EZ-1o; Fri, 28 Oct 2022 01:13:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431281.684152; Fri, 28 Oct 2022 01:13:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooDvk-0001ER-Ub; Fri, 28 Oct 2022 01:13:04 +0000
Received: by outflank-mailman (input) for mailman id 431281;
 Fri, 28 Oct 2022 01:13:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooDvj-0001EE-K7
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 01:13:03 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooDvj-0006ao-JK
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 01:13:03 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooDvj-00053z-Ib
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 01:13:03 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=0Z0NdNZ8cHI7o1K0415CUydndyUDhhsJGB0iJ9LdI8E=; b=eF8Vit/OdhR2bk4PnBK+Iu2/nG
	E7St8Prq5gNHIPRE6TE8URkb1DQHRfHUmUG88+hV4EIlfsm7EMqmZN7q2cqXdrdHWfnpqIC7IxUXg
	8raGY4mM/+jad5wh6WBbJqT+CPAMvlCxT4uSL2VM0wcnojpwetGlt5MHdG4iJ+TKjQzc=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.13] xen/arm, libxl: Implement XEN_DOMCTL_shadow_op for Arm
Message-Id: <E1ooDvj-00053z-Ib@xenbits.xenproject.org>
Date: Fri, 28 Oct 2022 01:13:03 +0000

commit e6b1e3892b685346490eded1f6b6f5392b1020b0
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Tue Oct 11 15:52:02 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:52:02 2022 +0200

    xen/arm, libxl: Implement XEN_DOMCTL_shadow_op for Arm
    
    This commit implements the `XEN_DOMCTL_shadow_op` support in Xen
    for Arm. The p2m pages pool size for xl guests is supposed to be
    determined by `XEN_DOMCTL_shadow_op`. Hence, this commit:
    
    - Introduces a function `p2m_domctl` and implements the subops
    `XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION` and
    `XEN_DOMCTL_SHADOW_OP_GET_ALLOCATION` of `XEN_DOMCTL_shadow_op`.
    
    - Adds the `XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION` support in libxl.
    
    This enables setting the shadow memory pool size when creating a
    guest from xl and retrieving the shadow memory pool size from Xen.
    
    Note that the `XEN_DOMCTL_shadow_op` added in this commit is only
    a dummy op; the functionality of setting/getting the p2m memory pool
    size for xl guests will be added in the following commits.
    
    This is part of CVE-2022-33747 / XSA-409.
    
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    master commit: cf2a68d2ffbc3ce95e01449d46180bddb10d24a0
    master date: 2022-10-11 14:28:42 +0200
---
 tools/libxl/libxl_arm.c | 12 ++++++++++++
 xen/arch/arm/domctl.c   | 32 ++++++++++++++++++++++++++++++++
 2 files changed, 44 insertions(+)

diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
index f4b3dc8e71..025df1bfd0 100644
--- a/tools/libxl/libxl_arm.c
+++ b/tools/libxl/libxl_arm.c
@@ -130,6 +130,18 @@ int libxl__arch_domain_save_config(libxl__gc *gc,
 int libxl__arch_domain_create(libxl__gc *gc, libxl_domain_config *d_config,
                               uint32_t domid)
 {
+    libxl_ctx *ctx = libxl__gc_owner(gc);
+    unsigned int shadow_mb = DIV_ROUNDUP(d_config->b_info.shadow_memkb, 1024);
+
+    int r = xc_shadow_control(ctx->xch, domid,
+                              XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION,
+                              &shadow_mb, 0);
+    if (r) {
+        LOGED(ERROR, domid,
+              "Failed to set %u MiB shadow allocation", shadow_mb);
+        return ERROR_FAIL;
+    }
+
     return 0;
 }
 
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 9da88b8c64..ef1299ae1c 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -45,11 +45,43 @@ static int handle_vuart_init(struct domain *d,
     return rc;
 }
 
+static long p2m_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
+                       XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
+{
+    if ( unlikely(d == current->domain) )
+    {
+        printk(XENLOG_ERR "Tried to do a p2m domctl op on itself.\n");
+        return -EINVAL;
+    }
+
+    if ( unlikely(d->is_dying) )
+    {
+        printk(XENLOG_ERR "Tried to do a p2m domctl op on dying domain %u\n",
+               d->domain_id);
+        return -EINVAL;
+    }
+
+    switch ( sc->op )
+    {
+    case XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION:
+        return 0;
+    case XEN_DOMCTL_SHADOW_OP_GET_ALLOCATION:
+        return 0;
+    default:
+    {
+        printk(XENLOG_ERR "Bad p2m domctl op %u\n", sc->op);
+        return -EINVAL;
+    }
+    }
+}
+
 long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
                     XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     switch ( domctl->cmd )
     {
+    case XEN_DOMCTL_shadow_op:
+        return p2m_domctl(d, &domctl->u.shadow_op, u_domctl);
     case XEN_DOMCTL_cacheflush:
     {
         gfn_t s = _gfn(domctl->u.cacheflush.start_pfn);
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.13


From xen-changelog-bounces@lists.xenproject.org Fri Oct 28 01:13:15 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Oct 2022 01:13:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431282.684155 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooDvv-0001HK-2t; Fri, 28 Oct 2022 01:13:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431282.684155; Fri, 28 Oct 2022 01:13:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooDvv-0001HD-03; Fri, 28 Oct 2022 01:13:15 +0000
Received: by outflank-mailman (input) for mailman id 431282;
 Fri, 28 Oct 2022 01:13:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooDvt-0001H0-N8
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 01:13:13 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooDvt-0006as-MO
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 01:13:13 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooDvt-00054O-Lm
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 01:13:13 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=UzUdstmAPLnzXB1/K5o7qKM5iFN6B6PNQfrcLTAH7AQ=; b=xD74QtSdGDnvjLjL6HCrHu74Ta
	7CoGAvtS2HOkr1Eq7GkvvwUeH4nrPKnkAJZm4WM6Aig4Zf2akVVrbtUH9UxH1GJTI5rG5Exa4pyTY
	4qsbd8iztZDsuQFPz6Ue91Y2zHWVjXC7hvJ54AVGv70b7uP0b5LL3nBE9QcqrsBkLowo=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.13] xen/arm: Allocate and free P2M pages from the P2M pool
Message-Id: <E1ooDvt-00054O-Lm@xenbits.xenproject.org>
Date: Fri, 28 Oct 2022 01:13:13 +0000

commit 867fcf6ca2e6a5dfb490bc5a1bd9b36d8ba88531
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Tue Oct 11 15:52:18 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:52:18 2022 +0200

    xen/arm: Allocate and free P2M pages from the P2M pool
    
    This commit sets up and tears down the p2m pages pool for
    non-privileged Arm guests by calling `p2m_set_allocation` and
    `p2m_teardown_allocation`.
    
    - For dom0, P2M pages should come directly from the heap instead of
    the p2m pool, so that the kernel may take advantage of the extended
    regions.
    
    - For xl guests, the setting of the p2m pool is called in
    `XEN_DOMCTL_shadow_op` and the p2m pool is destroyed in
    `domain_relinquish_resources`. Note that domctl->u.shadow_op.mb is
    updated with the new size when setting the p2m pool.
    
    - For dom0less domUs, the setting of the p2m pool is done before
    allocating memory during domain creation. Users can specify the p2m
    pool size via the `xen,domain-p2m-mem-mb` dts property.
    
    To actually allocate/free pages from the p2m pool, this commit adds
    two helper functions namely `p2m_alloc_page` and `p2m_free_page` to
    `struct p2m_domain`. By replacing the `alloc_domheap_page` and
    `free_domheap_page` with these two helper functions, p2m pages can
    be added/removed from the list of p2m pool rather than from the heap.
    
    Since the page returned by `p2m_alloc_page` is already cleaned, take
    the opportunity to remove the redundant `clean_page` in
    `p2m_create_table`.
    
    This is part of CVE-2022-33747 / XSA-409.
    
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    master commit: cbea5a1149ca7fd4b7cdbfa3ec2e4f109b601ff7
    master date: 2022-10-11 14:28:44 +0200
---
 docs/misc/arm/device-tree/booting.txt |  8 +++++
 xen/arch/arm/domain.c                 |  8 +++++
 xen/arch/arm/domain_build.c           | 29 ++++++++++++++++++
 xen/arch/arm/domctl.c                 | 23 +++++++++++++-
 xen/arch/arm/p2m.c                    | 57 ++++++++++++++++++++++++++++++++---
 xen/include/asm-arm/domain.h          |  1 +
 6 files changed, 121 insertions(+), 5 deletions(-)

diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
index 5243bc7fd3..470c9491a7 100644
--- a/docs/misc/arm/device-tree/booting.txt
+++ b/docs/misc/arm/device-tree/booting.txt
@@ -164,6 +164,14 @@ with the following properties:
     Both #address-cells and #size-cells need to be specified because
     both sub-nodes (described shortly) have reg properties.
 
+- xen,domain-p2m-mem-mb
+
+    Optional. A 32-bit integer specifying the amount of megabytes of RAM
+    used for the domain P2M pool. This is in-sync with the shadow_memory
+    option in xl.cfg. Leaving this field empty in device tree will lead to
+    the default size of domain P2M pool, i.e. 1MB per guest vCPU plus 4KB
+    per MB of guest RAM plus 512KB for guest extended regions.
+
 Under the "xen,domain" compatible node, one or more sub-nodes are present
 for the DomU kernel and ramdisk.
 
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 1e24a7dbb4..31abe7d6f9 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -1022,6 +1022,14 @@ int domain_relinquish_resources(struct domain *d)
         if ( ret )
             return ret;
 
+        d->arch.relmem = RELMEM_p2m_pool;
+        /* Fallthrough */
+
+    case RELMEM_p2m_pool:
+        ret = p2m_teardown_allocation(d);
+        if( ret )
+            return ret;
+
         d->arch.relmem = RELMEM_done;
         /* Fallthrough */
 
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index ce7f61e825..eb859600e5 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2327,6 +2327,21 @@ static void __init find_gnttab_region(struct domain *d,
            kinfo->gnttab_start, kinfo->gnttab_start + kinfo->gnttab_size);
 }
 
+static unsigned long __init domain_p2m_pages(unsigned long maxmem_kb,
+                                             unsigned int smp_cpus)
+{
+    /*
+     * Keep in sync with libxl__get_required_paging_memory().
+     * 256 pages (1MB) per vcpu, plus 1 page per MiB of RAM for the P2M map,
+     * plus 128 pages to cover extended regions.
+     */
+    unsigned long memkb = 4 * (256 * smp_cpus + (maxmem_kb / 1024) + 128);
+
+    BUILD_BUG_ON(PAGE_SIZE != SZ_4K);
+
+    return DIV_ROUND_UP(memkb, 1024) << (20 - PAGE_SHIFT);
+}
+
 static int __init construct_domain(struct domain *d, struct kernel_info *kinfo)
 {
     unsigned int i;
@@ -2418,6 +2433,8 @@ static int __init construct_domU(struct domain *d,
     struct kernel_info kinfo = {};
     int rc;
     u64 mem;
+    u32 p2m_mem_mb;
+    unsigned long p2m_pages;
 
     rc = dt_property_read_u64(node, "memory", &mem);
     if ( !rc )
@@ -2427,6 +2444,18 @@ static int __init construct_domU(struct domain *d,
     }
     kinfo.unassigned_mem = (paddr_t)mem * SZ_1K;
 
+    rc = dt_property_read_u32(node, "xen,domain-p2m-mem-mb", &p2m_mem_mb);
+    /* If xen,domain-p2m-mem-mb is not specified, use the default value. */
+    p2m_pages = rc ?
+                p2m_mem_mb << (20 - PAGE_SHIFT) :
+                domain_p2m_pages(mem, d->max_vcpus);
+
+    spin_lock(&d->arch.paging.lock);
+    rc = p2m_set_allocation(d, p2m_pages, NULL);
+    spin_unlock(&d->arch.paging.lock);
+    if ( rc != 0 )
+        return rc;
+
     printk("*** LOADING DOMU cpus=%u memory=%"PRIx64"KB ***\n", d->max_vcpus, mem);
 
     kinfo.vpl011 = dt_property_read_bool(node, "vpl011");
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index ef1299ae1c..dab3da3a23 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -48,6 +48,9 @@ static int handle_vuart_init(struct domain *d,
 static long p2m_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
                        XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
+    long rc;
+    bool preempted = false;
+
     if ( unlikely(d == current->domain) )
     {
         printk(XENLOG_ERR "Tried to do a p2m domctl op on itself.\n");
@@ -64,9 +67,27 @@ static long p2m_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
     switch ( sc->op )
     {
     case XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION:
-        return 0;
+    {
+        /* Allow and handle preemption */
+        spin_lock(&d->arch.paging.lock);
+        rc = p2m_set_allocation(d, sc->mb << (20 - PAGE_SHIFT), &preempted);
+        spin_unlock(&d->arch.paging.lock);
+
+        if ( preempted )
+            /* Not finished. Set up to re-run the call. */
+            rc = hypercall_create_continuation(__HYPERVISOR_domctl, "h",
+                                               u_domctl);
+        else
+            /* Finished. Return the new allocation. */
+            sc->mb = p2m_get_allocation(d);
+
+        return rc;
+    }
     case XEN_DOMCTL_SHADOW_OP_GET_ALLOCATION:
+    {
+        sc->mb = p2m_get_allocation(d);
         return 0;
+    }
     default:
     {
         printk(XENLOG_ERR "Bad p2m domctl op %u\n", sc->op);
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 7d6fec7887..3196690544 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -53,6 +53,54 @@ static uint64_t generate_vttbr(uint16_t vmid, mfn_t root_mfn)
     return (mfn_to_maddr(root_mfn) | ((uint64_t)vmid << 48));
 }
 
+static struct page_info *p2m_alloc_page(struct domain *d)
+{
+    struct page_info *pg;
+
+    spin_lock(&d->arch.paging.lock);
+    /*
+     * For hardware domain, there should be no limit in the number of pages that
+     * can be allocated, so that the kernel may take advantage of the extended
+     * regions. Hence, allocate p2m pages for hardware domains from heap.
+     */
+    if ( is_hardware_domain(d) )
+    {
+        pg = alloc_domheap_page(NULL, 0);
+        if ( pg == NULL )
+        {
+            printk(XENLOG_G_ERR "Failed to allocate P2M pages for hwdom.\n");
+            spin_unlock(&d->arch.paging.lock);
+            return NULL;
+        }
+    }
+    else
+    {
+        pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
+        if ( unlikely(!pg) )
+        {
+            spin_unlock(&d->arch.paging.lock);
+            return NULL;
+        }
+        d->arch.paging.p2m_total_pages--;
+    }
+    spin_unlock(&d->arch.paging.lock);
+
+    return pg;
+}
+
+static void p2m_free_page(struct domain *d, struct page_info *pg)
+{
+    spin_lock(&d->arch.paging.lock);
+    if ( is_hardware_domain(d) )
+        free_domheap_page(pg);
+    else
+    {
+        d->arch.paging.p2m_total_pages++;
+        page_list_add_tail(pg, &d->arch.paging.p2m_freelist);
+    }
+    spin_unlock(&d->arch.paging.lock);
+}
+
 /* Return the size of the pool, rounded up to the nearest MB */
 unsigned int p2m_get_allocation(struct domain *d)
 {
@@ -754,7 +802,7 @@ static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry)
 
     ASSERT(!p2m_is_valid(*entry));
 
-    page = alloc_domheap_page(NULL, 0);
+    page = p2m_alloc_page(p2m->domain);
     if ( page == NULL )
         return -ENOMEM;
 
@@ -874,7 +922,7 @@ static void p2m_free_entry(struct p2m_domain *p2m,
     pg = mfn_to_page(mfn);
 
     page_list_del(pg, &p2m->pages);
-    free_domheap_page(pg);
+    p2m_free_page(p2m->domain, pg);
 }
 
 static bool p2m_split_superpage(struct p2m_domain *p2m, lpae_t *entry,
@@ -898,7 +946,7 @@ static bool p2m_split_superpage(struct p2m_domain *p2m, lpae_t *entry,
     ASSERT(level < target);
     ASSERT(p2m_is_superpage(*entry, level));
 
-    page = alloc_domheap_page(NULL, 0);
+    page = p2m_alloc_page(p2m->domain);
     if ( !page )
         return false;
 
@@ -1609,7 +1657,7 @@ int p2m_teardown(struct domain *d)
 
     while ( (pg = page_list_remove_head(&p2m->pages)) )
     {
-        free_domheap_page(pg);
+        p2m_free_page(p2m->domain, pg);
         count++;
         /* Arbitrarily preempt every 512 iterations */
         if ( !(count % 512) && hypercall_preempt_check() )
@@ -1633,6 +1681,7 @@ void p2m_final_teardown(struct domain *d)
         return;
 
     ASSERT(page_list_empty(&p2m->pages));
+    ASSERT(page_list_empty(&d->arch.paging.p2m_freelist));
 
     if ( p2m->root )
         free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 7bc14c2e9e..dc5b26d15e 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -73,6 +73,7 @@ struct arch_domain
         RELMEM_page,
         RELMEM_mapping,
         RELMEM_p2m,
+        RELMEM_p2m_pool,
         RELMEM_done,
     } relmem;
 
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.13


From xen-changelog-bounces@lists.xenproject.org Fri Oct 28 01:13:25 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Oct 2022 01:13:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431283.684158 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooDw5-0001KB-4P; Fri, 28 Oct 2022 01:13:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431283.684158; Fri, 28 Oct 2022 01:13:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooDw5-0001K3-1i; Fri, 28 Oct 2022 01:13:25 +0000
Received: by outflank-mailman (input) for mailman id 431283;
 Fri, 28 Oct 2022 01:13:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooDw3-0001Jj-Pw
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 01:13:23 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooDw3-0006bD-PE
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 01:13:23 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooDw3-00054r-Og
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 01:13:23 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=8OSgdeHHmw1LqaasltIWkLUKT0dQQ7v2PIpQ1Q9sCTk=; b=hE0jQrf0tHKK2uvMfGrqQOx79I
	VcEixl3k2RWhQI6NsZHIHkCVlwEAOI3rytVEq1tEXONL8taNQWQqSpGaQxTbFcgjbYLdJ042O6GpB
	vAHotA+UJcmj6gZXyUit87WmUUZMVOrg40lJPJyLMTb/ywmMoASg0PkIEelBzWlKb4zE=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.13] gnttab: correct locking on transitive grant copy error path
Message-Id: <E1ooDw3-00054r-Og@xenbits.xenproject.org>
Date: Fri, 28 Oct 2022 01:13:23 +0000

commit 042de0843936b690acbc6dbcf57d26f6adccfc06
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Tue Oct 11 15:53:28 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Tue Oct 11 15:53:28 2022 +0200

    gnttab: correct locking on transitive grant copy error path
    
    While the comment next to the lock dropping in preparation of
    recursively calling acquire_grant_for_copy() mistakenly talks about the
    rd == td case (excluded a few lines further up), the same concerns apply
    to the calling of release_grant_for_copy() on a subsequent error path.
    
    This is CVE-2022-33748 / XSA-411.
    
    Fixes: ad48fb963dbf ("gnttab: fix transitive grant handling")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    master commit: 6e3aab858eef614a21a782a3b73acc88e74690ea
    master date: 2022-10-11 14:29:30 +0200
---
 xen/common/grant_table.c | 21 +++++++++++++++++----
 1 file changed, 17 insertions(+), 4 deletions(-)

diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 709509e0fc..d242c08038 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -2584,9 +2584,8 @@ acquire_grant_for_copy(
                      trans_domid);
 
         /*
-         * acquire_grant_for_copy() could take the lock on the
-         * remote table (if rd == td), so we have to drop the lock
-         * here and reacquire.
+         * acquire_grant_for_copy() will take the lock on the remote table,
+         * so we have to drop the lock here and reacquire.
          */
         active_entry_release(act);
         grant_read_unlock(rgt);
@@ -2623,11 +2622,25 @@ acquire_grant_for_copy(
                           act->trans_gref != trans_gref ||
                           !act->is_sub_page)) )
         {
+            /*
+             * Like above for acquire_grant_for_copy() we need to drop and then
+             * re-acquire the locks here to prevent lock order inversion issues.
+             * Unlike for acquire_grant_for_copy() we don't need to re-check
+             * anything, as release_grant_for_copy() doesn't depend on the grant
+             * table entry: It only updates internal state and the status flags.
+             */
+            active_entry_release(act);
+            grant_read_unlock(rgt);
+
             release_grant_for_copy(td, trans_gref, readonly);
-            fixup_status_for_copy_pin(rd, act, status);
             rcu_unlock_domain(td);
+
+            grant_read_lock(rgt);
+            act = active_entry_acquire(rgt, gref);
+            fixup_status_for_copy_pin(rd, act, status);
             active_entry_release(act);
             grant_read_unlock(rgt);
+
             put_page(*page);
             *page = NULL;
             return ERESTART;
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.13


From xen-changelog-bounces@lists.xenproject.org Fri Oct 28 01:13:35 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Oct 2022 01:13:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431284.684163 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooDwF-0001Ml-6C; Fri, 28 Oct 2022 01:13:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431284.684163; Fri, 28 Oct 2022 01:13:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooDwF-0001Md-3B; Fri, 28 Oct 2022 01:13:35 +0000
Received: by outflank-mailman (input) for mailman id 431284;
 Fri, 28 Oct 2022 01:13:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooDwD-0001MT-Se
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 01:13:33 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooDwD-0006bP-S1
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 01:13:33 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooDwD-00055I-RP
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 01:13:33 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=qiC4y/Tl/CLigEKzcZxZtFb65vKTaV7bCMEKfaQonKU=; b=GG2a5iv9KlPkzE3V52+hRkTp26
	W2Z1cCzarzl6yaqTu6/m99ns7wRw9NyjTX42CXkAkzu1AdxFhR6wMBytKBuKVQWSQB4TIh0bxJlMX
	B83b3dLNHZ6bQV0j3xiqZpCuNG/0rOEKFcCUBxVFOOijvO8PAHDlCpP8lINBTrsqIkeY=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.13] libxl/Arm: correct xc_shadow_control() invocation to fix build
Message-Id: <E1ooDwD-00055I-RP@xenbits.xenproject.org>
Date: Fri, 28 Oct 2022 01:13:33 +0000

commit 0be63c2615b268001f7cc9b72ce25eed952737dc
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Wed Oct 12 17:36:48 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Wed Oct 12 17:36:48 2022 +0200

    libxl/Arm: correct xc_shadow_control() invocation to fix build
    
    The backport didn't adapt to the earlier function prototype taking more
    (unused here) arguments.
    
    Fixes: c5215044578e ("xen/arm, libxl: Implement XEN_DOMCTL_shadow_op for Arm")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Henry Wang <Henry.Wang@arm.com>
    Acked-by: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/libxl/libxl_arm.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
index 025df1bfd0..79cfb9cd29 100644
--- a/tools/libxl/libxl_arm.c
+++ b/tools/libxl/libxl_arm.c
@@ -131,14 +131,14 @@ int libxl__arch_domain_create(libxl__gc *gc, libxl_domain_config *d_config,
                               uint32_t domid)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
-    unsigned int shadow_mb = DIV_ROUNDUP(d_config->b_info.shadow_memkb, 1024);
+    unsigned long shadow_mb = DIV_ROUNDUP(d_config->b_info.shadow_memkb, 1024);
 
     int r = xc_shadow_control(ctx->xch, domid,
                               XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION,
-                              &shadow_mb, 0);
+                              NULL, 0, &shadow_mb, 0, NULL);
     if (r) {
         LOGED(ERROR, domid,
-              "Failed to set %u MiB shadow allocation", shadow_mb);
+              "Failed to set %lu MiB shadow allocation", shadow_mb);
         return ERROR_FAIL;
     }
 
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.13


From xen-changelog-bounces@lists.xenproject.org Fri Oct 28 01:13:45 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Oct 2022 01:13:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431285.684167 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooDwP-0001QL-7a; Fri, 28 Oct 2022 01:13:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431285.684167; Fri, 28 Oct 2022 01:13:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooDwP-0001QD-4p; Fri, 28 Oct 2022 01:13:45 +0000
Received: by outflank-mailman (input) for mailman id 431285;
 Fri, 28 Oct 2022 01:13:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooDwO-0001Q0-0S
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 01:13:44 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooDwN-0006bj-V5
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 01:13:43 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooDwN-00055j-UL
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 01:13:43 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=WidDeS88n4WfhwX13KWQdwwpoDOjjDxCCtuWcDxf69s=; b=DWHz7vXgWMvKkr01l84I/zQdkh
	Rl510zKX6ibhZcecMkFD3mfTnQ+FD1o4Ts+Uol3v9G3EZ+9N0touU2Sszm83AgtQbUoXvJXfI4Q0l
	xutDg1q0c5b6pmMPfnjO+7z/hN3XYVdRw+B47UqpmEfhyENyjeMTvm/32+VlLd2oOotk=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.13] arm/p2m: Rework p2m_init()
Message-Id: <E1ooDwN-00055j-UL@xenbits.xenproject.org>
Date: Fri, 28 Oct 2022 01:13:43 +0000

commit 3954468f3af2525dbe1031d5711bad8656802d3c
Author:     Andrew Cooper <andrew.cooper3@citrix.com>
AuthorDate: Tue Oct 25 09:19:36 2022 +0000
Commit:     Julien Grall <jgrall@amazon.com>
CommitDate: Tue Oct 25 21:09:25 2022 +0100

    arm/p2m: Rework p2m_init()
    
    p2m_init() is mostly trivial initialisation, but it has two fallible
    operations sitting on either side of the backpointer assignment that
    triggers teardown to take action.
    
    p2m_free_vmid() is idempotent with a failed p2m_alloc_vmid(), so rearrange
    p2m_init() to perform all trivial setup, then set the backpointer, then
    perform all fallible setup.
    
    This will simplify a future bugfix which needs to add a third fallible
    operation.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
    (cherry picked from commit: 3783e583319fa1ce75e414d851f0fde191a14753)
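
The reordering described above follows a common pattern: do all infallible setup first, publish the object (set the backpointer), then perform the fallible allocations, whose undo operations are idempotent. A stand-alone C model of that pattern (the structure names and allocators are illustrative stand-ins, not Xen's real code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative stand-ins for the real Xen structures and helpers. */
struct p2m {
    struct domain *domain;   /* backpointer: NULL until init is "live" */
    int vmid;                /* -1 == invalid */
    bool table;
};
struct domain { struct p2m p2m; };

static bool fail_vmid, fail_table;   /* knobs to simulate allocation failure */

static int alloc_vmid(struct p2m *p2m)  { if (fail_vmid) return -1; p2m->vmid = 7; return 0; }
static void free_vmid(struct p2m *p2m)  { p2m->vmid = -1; }   /* idempotent with a failed alloc */
static int alloc_table(struct p2m *p2m) { if (fail_table) return -1; p2m->table = true; return 0; }

static int p2m_init(struct domain *d)
{
    struct p2m *p2m = &d->p2m;
    int rc;

    /* 1. Trivial (infallible) initialisation. */
    p2m->vmid = -1;
    p2m->table = false;

    /* 2. Publish: teardown now knows there is something to undo. */
    p2m->domain = d;

    /* 3. Fallible setup, all after the backpointer is set. */
    rc = alloc_vmid(p2m);
    if (rc)
        return rc;
    rc = alloc_table(p2m);
    if (rc)
        return rc;
    return 0;
}

static void p2m_teardown(struct domain *d)
{
    struct p2m *p2m = &d->p2m;
    if (!p2m->domain)
        return;              /* init never got far enough to need undoing */
    free_vmid(p2m);          /* safe even if alloc_vmid() failed */
    p2m->table = false;
    p2m->domain = NULL;
}
```

Because free_vmid() is idempotent with a failed alloc_vmid(), teardown can run unconditionally once the backpointer is set, no matter which fallible step failed — which is what lets a third fallible operation be added later without new cleanup logic.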
---
 xen/arch/arm/p2m.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 3196690544..fa6d0a83e9 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1698,7 +1698,7 @@ void p2m_final_teardown(struct domain *d)
 int p2m_init(struct domain *d)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    int rc = 0;
+    int rc;
     unsigned int cpu;
 
     rwlock_init(&p2m->lock);
@@ -1707,11 +1707,6 @@ int p2m_init(struct domain *d)
     INIT_PAGE_LIST_HEAD(&d->arch.paging.p2m_freelist);
 
     p2m->vmid = INVALID_VMID;
-
-    rc = p2m_alloc_vmid(d);
-    if ( rc != 0 )
-        return rc;
-
     p2m->max_mapped_gfn = _gfn(0);
     p2m->lowest_mapped_gfn = _gfn(ULONG_MAX);
 
@@ -1727,8 +1722,6 @@ int p2m_init(struct domain *d)
     p2m->clean_pte = is_iommu_enabled(d) &&
         !iommu_has_feature(d, IOMMU_FEAT_COHERENT_WALK);
 
-    rc = p2m_alloc_table(d);
-
     /*
      * Make sure that the type chosen to is able to store the an vCPU ID
      * between 0 and the maximum of virtual CPUS supported as long as
@@ -1741,13 +1734,20 @@ int p2m_init(struct domain *d)
        p2m->last_vcpu_ran[cpu] = INVALID_VCPU_ID;
 
     /*
-     * Besides getting a domain when we only have the p2m in hand,
-     * the back pointer to domain is also used in p2m_teardown()
-     * as an end-of-initialization indicator.
+     * "Trivial" initialisation is now complete.  Set the backpointer so
+     * p2m_teardown() and friends know to do something.
      */
     p2m->domain = d;
 
-    return rc;
+    rc = p2m_alloc_vmid(d);
+    if ( rc )
+        return rc;
+
+    rc = p2m_alloc_table(d);
+    if ( rc )
+        return rc;
+
+    return 0;
 }
 
 /*
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.13


From xen-changelog-bounces@lists.xenproject.org Fri Oct 28 01:13:55 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Oct 2022 01:13:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431286.684171 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooDwZ-0001U6-Ab; Fri, 28 Oct 2022 01:13:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431286.684171; Fri, 28 Oct 2022 01:13:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooDwZ-0001Te-7q; Fri, 28 Oct 2022 01:13:55 +0000
Received: by outflank-mailman (input) for mailman id 431286;
 Fri, 28 Oct 2022 01:13:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooDwY-0001TP-2R
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 01:13:54 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooDwY-0006bt-1j
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 01:13:54 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooDwY-00057n-17
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 01:13:54 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=sthoh9UkZmyYCNawzaEqvOSef/dvBkMEBCcmSAx/lGw=; b=X1CPy71JgjnApyxVG2kwtxjST/
	sy8vig0+z/zCN2l4XDZb+Vz8L9ihuvCZTs7vk1B4ukQ9pUAZs8jZ1PFJOS6XV7GJmag96pp5Epf8B
	7+uEKCLUevckT/WfInEMbP8ZhqZl99SYgmtXdGGVyjuuIN3tpqRr1AtZ8tJvKbHKFppQ=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.13] xen/arm: p2m: Populate pages for GICv2 mapping in p2m_init()
Message-Id: <E1ooDwY-00057n-17@xenbits.xenproject.org>
Date: Fri, 28 Oct 2022 01:13:54 +0000

commit 5b668634a9feb68e7a27339f25591b019d0923c3
Author:     Henry Wang <Henry.Wang@arm.com>
AuthorDate: Tue Oct 25 09:19:37 2022 +0000
Commit:     Julien Grall <jgrall@amazon.com>
CommitDate: Tue Oct 25 21:09:58 2022 +0100

    xen/arm: p2m: Populate pages for GICv2 mapping in p2m_init()
    
    Hardware using GICv2 needs a P2M mapping of the 8KB GICv2 area created
    when the domain is created.  The worst case for the page tables requires
    6 P2M pages, because the two pages are consecutive but not necessarily in
    the same L3 page table; to keep a buffer on top of that, populate 16
    pages into the P2M pages pool in p2m_init() at the domain creation stage
    to satisfy the GICv2 requirement.  For GICv3 this P2M mapping is not
    necessary, but since the 16 pages allocated here would not be lost,
    populate them unconditionally.
    
    With the default 16 P2M pages populated, domain creation can now fail
    with P2M pages already in use.  To properly free the P2M in that case,
    first make preemption of p2m_teardown() optional, then call
    p2m_teardown() and p2m_set_allocation(d, 0, NULL) non-preemptively from
    p2m_final_teardown().  As non-preemptive p2m_teardown() should only
    return 0, use a BUG_ON() to confirm that.
    
    Since p2m_final_teardown() is called either after
    domain_relinquish_resources(), where relinquish_p2m_mapping() has already
    been called, or from the failure path of
    domain_create()/arch_domain_create(), where mappings that require
    p2m_put_l3_page() should never be created, relinquish_p2m_mapping() is
    not added to p2m_final_teardown(); in-code comments are added to document
    this.
    
    Fixes: cbea5a1149ca ("xen/arm: Allocate and free P2M pages from the P2M pool")
    Suggested-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
    (cherry picked from commit: c7cff1188802646eaa38e918e5738da0e84949be)
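
The optional preemption added here is a standard hypervisor pattern: a long-running teardown loop periodically offers to stop with -ERESTART so the hypercall can be restarted, while a non-preemptible caller runs the same loop to completion. A minimal, self-contained C sketch (the 512-iteration period mirrors the patch; the page counter and preemption flag are illustrative stand-ins for the real p2m page list and hypercall_preempt_check()):

```c
#include <assert.h>
#include <stdbool.h>

#define ERESTART 85

static int pages_in_use;           /* stand-in for the p2m page list */
static bool preempt_requested;     /* stand-in for hypercall_preempt_check() */

/* Free pages one at a time; with allow_preemption, bail out every 512
 * iterations when a preemption is pending, returning -ERESTART so the
 * caller can continue the hypercall later. */
static int teardown(bool allow_preemption)
{
    unsigned long count = 0;

    while (pages_in_use > 0) {
        pages_in_use--;            /* free one page */
        count++;
        /* Arbitrarily check for preemption every 512 iterations. */
        if (allow_preemption && !(count % 512) && preempt_requested)
            return -ERESTART;
    }
    return 0;
}
```

The non-preemptive path (as in p2m_final_teardown()) must run to completion, so a return of -ERESTART there would be a bug — hence the BUG_ON(p2m_teardown(d, false)) in the patch.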
---
 xen/arch/arm/domain.c     |  2 +-
 xen/arch/arm/p2m.c        | 34 ++++++++++++++++++++++++++++++++--
 xen/include/asm-arm/p2m.h | 14 ++++++++++----
 3 files changed, 43 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 31abe7d6f9..98395173db 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -1018,7 +1018,7 @@ int domain_relinquish_resources(struct domain *d)
         /* Fallthrough */
 
     case RELMEM_p2m:
-        ret = p2m_teardown(d);
+        ret = p2m_teardown(d, true);
         if ( ret )
             return ret;
 
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index fa6d0a83e9..ae0c8d23d4 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1629,7 +1629,7 @@ static void p2m_free_vmid(struct domain *d)
     spin_unlock(&vmid_alloc_lock);
 }
 
-int p2m_teardown(struct domain *d)
+int p2m_teardown(struct domain *d, bool allow_preemption)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     unsigned long count = 0;
@@ -1637,6 +1637,9 @@ int p2m_teardown(struct domain *d)
     unsigned int i;
     int rc = 0;
 
+    if ( page_list_empty(&p2m->pages) )
+        return 0;
+
     p2m_write_lock(p2m);
 
     /*
@@ -1660,7 +1663,7 @@ int p2m_teardown(struct domain *d)
         p2m_free_page(p2m->domain, pg);
         count++;
         /* Arbitrarily preempt every 512 iterations */
-        if ( !(count % 512) && hypercall_preempt_check() )
+        if ( allow_preemption && !(count % 512) && hypercall_preempt_check() )
         {
             rc = -ERESTART;
             break;
@@ -1680,7 +1683,20 @@ void p2m_final_teardown(struct domain *d)
     if ( !p2m->domain )
         return;
 
+    /*
+     * No need to call relinquish_p2m_mapping() here because
+     * p2m_final_teardown() is called either after domain_relinquish_resources()
+     * where relinquish_p2m_mapping() has been called, or from failure path of
+     * domain_create()/arch_domain_create() where mappings that require
+     * p2m_put_l3_page() should never be created. For the latter case, also see
+     * comment on top of the p2m_set_entry() for more info.
+     */
+
+    BUG_ON(p2m_teardown(d, false));
     ASSERT(page_list_empty(&p2m->pages));
+
+    while ( p2m_teardown_allocation(d) == -ERESTART )
+        continue; /* No preemption support here */
     ASSERT(page_list_empty(&d->arch.paging.p2m_freelist));
 
     if ( p2m->root )
@@ -1747,6 +1763,20 @@ int p2m_init(struct domain *d)
     if ( rc )
         return rc;
 
+    /*
+     * Hardware using GICv2 needs to create a P2M mapping of 8KB GICv2 area
+     * when the domain is created. Considering the worst case for page
+     * tables and keep a buffer, populate 16 pages to the P2M pages pool here.
+     * For GICv3, the above-mentioned P2M mapping is not necessary, but since
+     * the allocated 16 pages here would not be lost, hence populate these
+     * pages unconditionally.
+     */
+    spin_lock(&d->arch.paging.lock);
+    rc = p2m_set_allocation(d, 16, NULL);
+    spin_unlock(&d->arch.paging.lock);
+    if ( rc )
+        return rc;
+
     return 0;
 }
 
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index b1c9b947bb..45d535830f 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -173,14 +173,18 @@ int p2m_init(struct domain *d);
 
 /*
  * The P2M resources are freed in two parts:
- *  - p2m_teardown() will be called when relinquish the resources. It
- *    will free large resources (e.g. intermediate page-tables) that
- *    requires preemption.
+ *  - p2m_teardown() will be called preemptively when relinquish the
+ *    resources, in which case it will free large resources (e.g. intermediate
+ *    page-tables) that requires preemption.
  *  - p2m_final_teardown() will be called when domain struct is been
  *    freed. This *cannot* be preempted and therefore one small
  *    resources should be freed here.
+ *  Note that p2m_final_teardown() will also call p2m_teardown(), to properly
+ *  free the P2M when failures happen in the domain creation with P2M pages
+ *  already in use. In this case p2m_teardown() is called non-preemptively and
+ *  p2m_teardown() will always return 0.
  */
-int p2m_teardown(struct domain *d);
+int p2m_teardown(struct domain *d, bool allow_preemption);
 void p2m_final_teardown(struct domain *d);
 
 /*
@@ -245,6 +249,8 @@ mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
 /*
  * Direct set a p2m entry: only for use by the P2M code.
  * The P2M write lock should be taken.
+ * TODO: Add a check in __p2m_set_entry() to avoid creating a mapping in
+ * arch_domain_create() that requires p2m_put_l3_page() to be called.
  */
 int p2m_set_entry(struct p2m_domain *p2m,
                   gfn_t sgfn,
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.13


From xen-changelog-bounces@lists.xenproject.org Fri Oct 28 04:11:06 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Oct 2022 04:11:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431395.684255 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooGhy-0001hm-ND; Fri, 28 Oct 2022 04:11:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431395.684255; Fri, 28 Oct 2022 04:11:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooGhy-0001he-KG; Fri, 28 Oct 2022 04:11:02 +0000
Received: by outflank-mailman (input) for mailman id 431395;
 Fri, 28 Oct 2022 04:11:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooGhy-0001hY-12
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 04:11:02 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooGhy-0001ZD-0J
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 04:11:02 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooGhx-0004vm-VW
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 04:11:01 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=06oZ6WwnfJk6Ix8fq00YaOCLG0t6dbXx85mHI3E9cP8=; b=Dyo4L5vVy0DdX8VeGWPVsUdoKF
	A9H3MqQabwozgt1/7/DKZYOcTvFTE4PWledvbI+0ULR0MPq5j9uswsXNp0PtBAkGWknIySSPXgXNB
	CmZGatRllq0X9LD3BrgeRctfG9OjGnSYzK0PT0tKwTZ6Fxhxm9S23sSIlIR6g5GHDO3c=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] vpci: don't assume that vpci per-device data exists unconditionally
Message-Id: <E1ooGhx-0004vm-VW@xenbits.xenproject.org>
Date: Fri, 28 Oct 2022 04:11:01 +0000

commit 6ccb5e308ceeb895fbccd87a528a8bd24325aa39
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Wed Oct 26 14:55:30 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Wed Oct 26 14:55:30 2022 +0200

    vpci: don't assume that vpci per-device data exists unconditionally
    
    It's possible for a device to be assigned to a domain but have no
    vpci structure if vpci_process_pending() failed and called
    vpci_remove_device() as a result.  The unconditional accesses done by
    vpci_{read,write}() and vpci_remove_device() to pdev->vpci would
    then trigger a NULL pointer dereference.
    
    Add checks for pdev->vpci presence in the affected functions.
    
    Fixes: 9c244fdef7 ('vpci: add header handlers')
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
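
The fix is the usual "optional per-device state" guard: treat pdev->vpci == NULL the same as "no vPCI handling for this device" and fall back to direct hardware access rather than dereferencing a NULL pointer. A hedged sketch of the idea (the types, register array, and fallback value are illustrative, not Xen's real vPCI API):

```c
#include <assert.h>
#include <stddef.h>

struct vpci { unsigned int regs[64]; };      /* emulated config space (toy) */
struct pci_dev { struct vpci *vpci; };       /* vpci may be NULL */

static unsigned int hw_reads;                /* counts fallbacks to "hardware" */

static unsigned int read_hw(unsigned int reg)
{
    (void)reg;
    hw_reads++;
    return 0xffffffffu;                      /* illustrative hardware value */
}

/* Emulated read: only dereference pdev->vpci when it actually exists,
 * e.g. it may have been freed by a failed vpci_process_pending(). */
static unsigned int vpci_read(struct pci_dev *pdev, unsigned int reg)
{
    if (!pdev || !pdev->vpci)
        return read_hw(reg);                 /* passthrough to hardware */
    return pdev->vpci->regs[reg % 64];
}
```

The same guard appears in all three affected functions in the patch: the device being present on the domain's list no longer implies that its vPCI state exists.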
---
 xen/drivers/vpci/vpci.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
index 3467c0de86..647f7af679 100644
--- a/xen/drivers/vpci/vpci.c
+++ b/xen/drivers/vpci/vpci.c
@@ -37,7 +37,7 @@ extern vpci_register_init_t *const __end_vpci_array[];
 
 void vpci_remove_device(struct pci_dev *pdev)
 {
-    if ( !has_vpci(pdev->domain) )
+    if ( !has_vpci(pdev->domain) || !pdev->vpci )
         return;
 
     spin_lock(&pdev->vpci->lock);
@@ -326,7 +326,7 @@ uint32_t vpci_read(pci_sbdf_t sbdf, unsigned int reg, unsigned int size)
 
     /* Find the PCI dev matching the address. */
     pdev = pci_get_pdev(d, sbdf);
-    if ( !pdev )
+    if ( !pdev || !pdev->vpci )
         return vpci_read_hw(sbdf, reg, size);
 
     spin_lock(&pdev->vpci->lock);
@@ -436,7 +436,7 @@ void vpci_write(pci_sbdf_t sbdf, unsigned int reg, unsigned int size,
      * Passthrough everything that's not trapped.
      */
     pdev = pci_get_pdev(d, sbdf);
-    if ( !pdev )
+    if ( !pdev || !pdev->vpci )
     {
         vpci_write_hw(sbdf, reg, size, data);
         return;
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Fri Oct 28 04:11:14 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Oct 2022 04:11:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431396.684259 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooGi8-0001ju-OY; Fri, 28 Oct 2022 04:11:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431396.684259; Fri, 28 Oct 2022 04:11:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooGi8-0001jl-Ln; Fri, 28 Oct 2022 04:11:12 +0000
Received: by outflank-mailman (input) for mailman id 431396;
 Fri, 28 Oct 2022 04:11:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooGi8-0001jW-58
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 04:11:12 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooGi8-0001ZQ-3Z
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 04:11:12 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooGi8-0004wP-2X
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 04:11:12 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=oHXllh6Ru/iduioPBMT9J292Tv4yZYEZR4ecxB51MNA=; b=aWaXfEWpTE3RGCM/vN6OoQgdS4
	msN6xvtzCIYq6nVyZxEGVfHzwViYf6krRQdQh8ALpUhRPBLIsZ85EU89Wts83WDY4C6vMX1+Kv9wS
	y4OBF495Phvx0CHS7xHWz1ccGdmvh9y98MIJ2ZJdX2Ad2M7JrbwHDP/Cn8c5ffuctkzw=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] vpci/msix: remove from table list on detach
Message-Id: <E1ooGi8-0004wP-2X@xenbits.xenproject.org>
Date: Fri, 28 Oct 2022 04:11:12 +0000

commit c14aea137eab29eb9c30bfad745a00c65ad21066
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Wed Oct 26 14:56:58 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Wed Oct 26 14:56:58 2022 +0200

    vpci/msix: remove from table list on detach
    
    Teardown of MSIX vPCI related data doesn't currently remove the MSIX
    device data from the list of MSIX tables handled by the domain,
    leading to a use-after-free of the data in the msix structure.
    
    Remove the structure from the list before freeing it in order to fix
    this.
    
    Reported-by: Jan Beulich <jbeulich@suse.com>
    Fixes: d6281be9d0 ('vpci/msix: add MSI-X handlers')
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
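
The bug class here is generic: an object linked on an intrusive list must be unlinked before it is freed, or later list walks touch freed memory. A stand-alone C sketch of the corrected order (a minimal doubly linked list stands in for the domain's MSI-X table list; the member name `next` mirrors the patch, everything else is illustrative):

```c
#include <assert.h>
#include <stdlib.h>

struct list_head { struct list_head *next, *prev; };

static void list_init(struct list_head *h) { h->next = h->prev = h; }

/* Insert n right after head h. */
static void list_add(struct list_head *n, struct list_head *h)
{
    n->next = h->next;
    n->prev = h;
    h->next->prev = n;
    h->next = n;
}

/* Unlink n, stitching its neighbours together. */
static void list_del(struct list_head *n)
{
    n->prev->next = n->next;
    n->next->prev = n->prev;
}

struct msix { struct list_head next; int id; };

/* Correct teardown: unlink from the per-domain list, THEN free. */
static void remove_msix(struct msix *m)
{
    list_del(&m->next);   /* without this, the list keeps a dangling pointer */
    free(m);
}
```

Freeing without the list_del() leaves the neighbours' pointers aimed at freed memory — exactly the use-after-free the commit describes.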
---
 xen/drivers/vpci/vpci.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
index 647f7af679..98198dc2c9 100644
--- a/xen/drivers/vpci/vpci.c
+++ b/xen/drivers/vpci/vpci.c
@@ -51,8 +51,12 @@ void vpci_remove_device(struct pci_dev *pdev)
         xfree(r);
     }
     spin_unlock(&pdev->vpci->lock);
-    if ( pdev->vpci->msix && pdev->vpci->msix->pba )
-        iounmap(pdev->vpci->msix->pba);
+    if ( pdev->vpci->msix )
+    {
+        list_del(&pdev->vpci->msix->next);
+        if ( pdev->vpci->msix->pba )
+            iounmap(pdev->vpci->msix->pba);
+    }
     xfree(pdev->vpci->msix);
     xfree(pdev->vpci->msi);
     xfree(pdev->vpci);
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Fri Oct 28 04:11:22 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Oct 2022 04:11:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431397.684263 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooGiI-0001mn-Px; Fri, 28 Oct 2022 04:11:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431397.684263; Fri, 28 Oct 2022 04:11:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooGiI-0001mf-NL; Fri, 28 Oct 2022 04:11:22 +0000
Received: by outflank-mailman (input) for mailman id 431397;
 Fri, 28 Oct 2022 04:11:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooGiI-0001mW-7g
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 04:11:22 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooGiI-0001Zf-6p
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 04:11:22 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooGiI-0004xE-5j
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 04:11:22 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=HnBqmngXEsI4Pscl+UbNCRqoIaA3dDCyHysKezvWdz4=; b=BVddjWg6W8+rjrPcGuNrI6jD30
	NguqD9ZEMUmA38O0DjCJkS6Sb7ajjIlUtUMmDwoIvPrIt4Ryv4KQcJxSrrQQ24RhBd8o6kfaUnI+j
	GFqiKmqsWxOsyL6bm2zZpjrlDXmO5JKMoKazC09o5JhjgWiG63xF/fG2zep6vB3EQoAY=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] vpci: introduce a local vpci_bar variable to modify_decoding()
Message-Id: <E1ooGiI-0004xE-5j@xenbits.xenproject.org>
Date: Fri, 28 Oct 2022 04:11:22 +0000

commit 26bf76b48bbce3e7b126290374c64966dca47561
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Wed Oct 26 14:57:41 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Wed Oct 26 14:57:41 2022 +0200

    vpci: introduce a local vpci_bar variable to modify_decoding()
    
    This is done to shorten line length in the function in preparation for
    adding further usages of the vpci_bar data structure.
    
    No functional change.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/drivers/vpci/header.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c
index a1c928a0d2..eb9219a49a 100644
--- a/xen/drivers/vpci/header.c
+++ b/xen/drivers/vpci/header.c
@@ -103,24 +103,26 @@ static void modify_decoding(const struct pci_dev *pdev, uint16_t cmd,
 
     for ( i = 0; i < ARRAY_SIZE(header->bars); i++ )
     {
-        if ( !MAPPABLE_BAR(&header->bars[i]) )
+        struct vpci_bar *bar = &header->bars[i];
+
+        if ( !MAPPABLE_BAR(bar) )
             continue;
 
-        if ( rom_only && header->bars[i].type == VPCI_BAR_ROM )
+        if ( rom_only && bar->type == VPCI_BAR_ROM )
         {
             unsigned int rom_pos = (i == PCI_HEADER_NORMAL_NR_BARS)
                                    ? PCI_ROM_ADDRESS : PCI_ROM_ADDRESS1;
-            uint32_t val = header->bars[i].addr |
+            uint32_t val = bar->addr |
                            (map ? PCI_ROM_ADDRESS_ENABLE : 0);
 
-            header->bars[i].enabled = header->rom_enabled = map;
+            bar->enabled = header->rom_enabled = map;
             pci_conf_write32(pdev->sbdf, rom_pos, val);
             return;
         }
 
         if ( !rom_only &&
-             (header->bars[i].type != VPCI_BAR_ROM || header->rom_enabled) )
-            header->bars[i].enabled = map;
+             (bar->type != VPCI_BAR_ROM || header->rom_enabled) )
+            bar->enabled = map;
     }
 
     if ( !rom_only )
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Fri Oct 28 04:11:32 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Oct 2022 04:11:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431398.684267 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooGiS-0001qB-Rk; Fri, 28 Oct 2022 04:11:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431398.684267; Fri, 28 Oct 2022 04:11:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooGiS-0001q3-P0; Fri, 28 Oct 2022 04:11:32 +0000
Received: by outflank-mailman (input) for mailman id 431398;
 Fri, 28 Oct 2022 04:11:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooGiS-0001pv-Aa
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 04:11:32 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooGiS-0001a9-9o
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 04:11:32 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooGiS-0004xp-92
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 04:11:32 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=63inOc0cwSZ50ndB7tssITW5SSw7pCd8j4wtaT427L0=; b=zVwY9fkTZRettZ624gpCKWs/RJ
	rCUv6ycIhPFAw9cBbAN0at3sGR4sTQW7T7pk9henvskDwKjc9QpWjLo2ErhwfFmGz6F2fBsvOFGkY
	5mkXglFKXn8XohokaJpBrMOsFIP+ukDqHNR18lXfnmInpqGdomvc7GMVf/Ylyjnajx2Q=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] CI: Drop more TravisCI remnants
Message-Id: <E1ooGiS-0004xp-92@xenbits.xenproject.org>
Date: Fri, 28 Oct 2022 04:11:32 +0000

commit bad4832710c7261fad1abe2d0e8e2e1d259b3e8d
Author:     Andrew Cooper <andrew.cooper3@citrix.com>
AuthorDate: Wed Oct 26 13:39:06 2022 +0100
Commit:     Stefano Stabellini <stefano.stabellini@amd.com>
CommitDate: Wed Oct 26 12:18:54 2022 -0700

    CI: Drop more TravisCI remnants
    
    This was missed by previous attempts to remove Travis.
    
    Fixes: e0dc9b095e7c ("CI: Drop TravisCI")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 MAINTAINERS          |  1 -
 scripts/travis-build | 32 --------------------------------
 2 files changed, 33 deletions(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index 816656950a..175f10f33f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -274,7 +274,6 @@ W:	https://gitlab.com/xen-project/xen
 S:	Supported
 F:	.gitlab-ci.yml
 F:	automation/
-F:	scripts/travis-build
 
 CPU POOLS
 M:	Juergen Gross <jgross@suse.com>
diff --git a/scripts/travis-build b/scripts/travis-build
deleted file mode 100755
index 84d74266a0..0000000000
--- a/scripts/travis-build
+++ /dev/null
@@ -1,32 +0,0 @@
-#!/bin/bash -ex
-
-$CC --version
-
-# random config or default config
-if [[ "${RANDCONFIG}" == "y" ]]; then
-    make -C xen KCONFIG_ALLCONFIG=tools/kconfig/allrandom.config randconfig
-else
-    make -C xen defconfig
-fi
-
-# build up our configure options
-cfgargs=()
-cfgargs+=("--disable-stubdom") # more work needed into building this
-cfgargs+=("--disable-rombios")
-cfgargs+=("--enable-docs")
-cfgargs+=("--with-system-seabios=/usr/share/seabios/bios.bin")
-
-# Qemu requires Python 3.5 or later
-if ! type python3 || python3 -c "import sys; res = sys.version_info < (3, 5); exit(not(res))"; then
-    cfgargs+=("--with-system-qemu=/bin/false")
-fi
-
-if [[ "${XEN_TARGET_ARCH}" == "x86_64" ]]; then
-    cfgargs+=("--enable-tools")
-else
-    cfgargs+=("--disable-tools") # we don't have the cross depends installed
-fi
-
-./configure "${cfgargs[@]}"
-
-make dist
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Fri Oct 28 09:44:11 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Oct 2022 09:44:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431856.684480 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooLuH-0002R7-CG; Fri, 28 Oct 2022 09:44:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431856.684480; Fri, 28 Oct 2022 09:44:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooLuH-0002Qz-98; Fri, 28 Oct 2022 09:44:05 +0000
Received: by outflank-mailman (input) for mailman id 431856;
 Fri, 28 Oct 2022 09:44:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooLuG-0002Qt-9D
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 09:44:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooLuG-0008AJ-5p
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 09:44:04 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooLuG-000543-4Q
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 09:44:04 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=xNCPEB0/PwuCKwB8/VYSVzee1A78EZtsuyeB5slWF3M=; b=cz2c9dZauWmIieZ3QpP0rMF9A+
	xXLStCvdayKGEOKiedCsDVcrU9y3RoNv+Ewgw8y+eGmcDW/DWLKIF1sxKF2X5kBQUMMynwRFb8P/b
	yJclvGIBk3oH2FFRVRnJ4MF5He+NQJu82KvaoJ1Ru6jVxrj7aI3mDRWFqpQUFGs6Z3Y0=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] common: map_vcpu_info() wants to unshare the underlying page
Message-Id: <E1ooLuG-000543-4Q@xenbits.xenproject.org>
Date: Fri, 28 Oct 2022 09:44:04 +0000

commit 48980cf24d5cf41fd644600f99c753419505e735
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Fri Oct 28 11:38:32 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Fri Oct 28 11:38:32 2022 +0200

    common: map_vcpu_info() wants to unshare the underlying page
    
    Not passing P2M_UNSHARE to get_page_from_gfn() means there won't even be
    an attempt to unshare the referenced page, without any indication to the
    caller (e.g. -EAGAIN). Note that guests have no direct control over
    which of their pages are shared (or paged out), and hence they have no
    way to ensure, all on their own, that the subsequent obtaining of a
    writable type reference can actually succeed.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/common/domain.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index 8dd6cd5a8f..53f7e734fe 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -1484,7 +1484,7 @@ int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned int offset)
     if ( (v != current) && !(v->pause_flags & VPF_down) )
         return -EINVAL;
 
-    page = get_page_from_gfn(d, gfn, NULL, P2M_ALLOC);
+    page = get_page_from_gfn(d, gfn, NULL, P2M_UNSHARE);
     if ( !page )
         return -EINVAL;
 
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Fri Oct 28 09:44:15 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Oct 2022 09:44:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431857.684483 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooLuR-0002TH-DK; Fri, 28 Oct 2022 09:44:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431857.684483; Fri, 28 Oct 2022 09:44:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooLuR-0002T9-Ag; Fri, 28 Oct 2022 09:44:15 +0000
Received: by outflank-mailman (input) for mailman id 431857;
 Fri, 28 Oct 2022 09:44:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooLuQ-0002Sz-AR
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 09:44:14 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooLuQ-0008AO-9a
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 09:44:14 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooLuQ-00054b-8C
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 09:44:14 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=GEzOZJUHrNFx74hhuNPfkOsRFG2ZwYNp0OKNs5gC6fI=; b=icKAbLcD8qEaAS3vNW0lXksXUb
	9ZkxQXAFp/nkDDYXKPnviq9Bzr/aES/Vk9ArMgKsF/3C7aKfWyHvxZf3LRWJsjxGI0ZYyaPQktYUw
	hiv+wjG6I5jHxx/YUnB8rwhm3EpGSkkrrNNg/m74hItfBJfBbW1RPVDV/TRUavO+Baug=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] pci: do not disable memory decoding for devices
Message-Id: <E1ooLuQ-00054b-8C@xenbits.xenproject.org>
Date: Fri, 28 Oct 2022 09:44:14 +0000

commit 53d9133638c3f940a53df60352fabb0963d67ad3
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Fri Oct 28 11:40:00 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Fri Oct 28 11:40:00 2022 +0200

    pci: do not disable memory decoding for devices
    
    Commit 75cc460a1b added checks to ensure the positions of BARs from
    PCI devices don't overlap with regions defined on the memory map.
    When there's a collision, memory decoding is left disabled for the
    device, assuming that dom0 will reposition the BAR if necessary and
    enable memory decoding.
    
    While this would be the case for devices used by dom0, devices used
    by the firmware itself that have no driver would usually be left with
    memory decoding disabled by dom0 if that's the state dom0 found them
    in, and thus firmware trying to make use of them will not function
    correctly.
    
    The initial intent of 75cc460a1b was to prevent vPCI from creating
    MMIO mappings on the dom0 p2m over regions that would otherwise
    already have mappings established.  It's my view now that we likely
    went too far with 75cc460a1b, and Xen disabling memory decoding of
    devices (as buggy as they might be) is harmful, and reduces the set of
    hardware on which Xen works.
    
    This commit reverts most of 75cc460a1b, and instead adds checks to
    vPCI in order to prevent misplaced BARs from being added to the
    hardware domain p2m.  Whether BARs are mapped is tracked in the vpci
    structure, so that misplaced BARs are not mapped, and thus Xen won't
    attempt to unmap them when memory decoding is disabled.
    
    This restores the behavior of Xen for PV dom0 to the state it was in
    prior to 75cc460a1b, while also introducing a more contained fix for
    the vPCI BAR mapping issues.
    
    Fixes: 75cc460a1b ('xen/pci: detect when BARs are not suitably positioned')
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/drivers/passthrough/pci.c | 69 -------------------------------------------
 xen/drivers/vpci/header.c     | 21 +++++++++++--
 2 files changed, 18 insertions(+), 72 deletions(-)

diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index 149f68bb6e..b42acb8d7c 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -233,9 +233,6 @@ static void check_pdev(const struct pci_dev *pdev)
      PCI_STATUS_REC_TARGET_ABORT | PCI_STATUS_REC_MASTER_ABORT | \
      PCI_STATUS_SIG_SYSTEM_ERROR | PCI_STATUS_DETECTED_PARITY)
     u16 val;
-    unsigned int nbars = 0, rom_pos = 0, i;
-    static const char warn[] = XENLOG_WARNING
-        "%pp disabled: %sBAR [%#lx, %#lx] overlaps with memory map\n";
 
     if ( command_mask )
     {
@@ -254,8 +251,6 @@ static void check_pdev(const struct pci_dev *pdev)
     switch ( pci_conf_read8(pdev->sbdf, PCI_HEADER_TYPE) & 0x7f )
     {
     case PCI_HEADER_TYPE_BRIDGE:
-        nbars = PCI_HEADER_BRIDGE_NR_BARS;
-        rom_pos = PCI_ROM_ADDRESS1;
         if ( !bridge_ctl_mask )
             break;
         val = pci_conf_read16(pdev->sbdf, PCI_BRIDGE_CONTROL);
@@ -272,75 +267,11 @@ static void check_pdev(const struct pci_dev *pdev)
         }
         break;
 
-    case PCI_HEADER_TYPE_NORMAL:
-        nbars = PCI_HEADER_NORMAL_NR_BARS;
-        rom_pos = PCI_ROM_ADDRESS;
-        break;
-
     case PCI_HEADER_TYPE_CARDBUS:
         /* TODO */
         break;
     }
 #undef PCI_STATUS_CHECK
-
-    /* Check if BARs overlap with other memory regions. */
-    val = pci_conf_read16(pdev->sbdf, PCI_COMMAND);
-    if ( !(val & PCI_COMMAND_MEMORY) || pdev->ignore_bars )
-        return;
-
-    pci_conf_write16(pdev->sbdf, PCI_COMMAND, val & ~PCI_COMMAND_MEMORY);
-    for ( i = 0; i < nbars; )
-    {
-        uint64_t addr, size;
-        unsigned int reg = PCI_BASE_ADDRESS_0 + i * 4;
-        int rc = 1;
-
-        if ( (pci_conf_read32(pdev->sbdf, reg) & PCI_BASE_ADDRESS_SPACE) !=
-             PCI_BASE_ADDRESS_SPACE_MEMORY )
-            goto next;
-
-        rc = pci_size_mem_bar(pdev->sbdf, reg, &addr, &size,
-                              (i == nbars - 1) ? PCI_BAR_LAST : 0);
-        if ( rc < 0 )
-            /* Unable to size, better leave memory decoding disabled. */
-            return;
-        if ( size && !pci_check_bar(pdev, maddr_to_mfn(addr),
-                                    maddr_to_mfn(addr + size - 1)) )
-        {
-            /*
-             * Return without enabling memory decoding if BAR position is not
-             * in IO suitable memory. Let the hardware domain re-position the
-             * BAR.
-             */
-            printk(warn,
-                   &pdev->sbdf, "", PFN_DOWN(addr), PFN_DOWN(addr + size - 1));
-            return;
-        }
-
- next:
-        ASSERT(rc > 0);
-        i += rc;
-    }
-
-    if ( rom_pos &&
-         (pci_conf_read32(pdev->sbdf, rom_pos) & PCI_ROM_ADDRESS_ENABLE) )
-    {
-        uint64_t addr, size;
-        int rc = pci_size_mem_bar(pdev->sbdf, rom_pos, &addr, &size,
-                                  PCI_BAR_ROM);
-
-        if ( rc < 0 )
-            return;
-        if ( size && !pci_check_bar(pdev, maddr_to_mfn(addr),
-                                    maddr_to_mfn(addr + size - 1)) )
-        {
-            printk(warn, &pdev->sbdf, "ROM ", PFN_DOWN(addr),
-                   PFN_DOWN(addr + size - 1));
-            return;
-        }
-    }
-
-    pci_conf_write16(pdev->sbdf, PCI_COMMAND, val);
 }
 
 static void apply_quirks(struct pci_dev *pdev)
diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c
index eb9219a49a..d272b3f343 100644
--- a/xen/drivers/vpci/header.c
+++ b/xen/drivers/vpci/header.c
@@ -115,13 +115,18 @@ static void modify_decoding(const struct pci_dev *pdev, uint16_t cmd,
             uint32_t val = bar->addr |
                            (map ? PCI_ROM_ADDRESS_ENABLE : 0);
 
-            bar->enabled = header->rom_enabled = map;
+            if ( pci_check_bar(pdev, _mfn(PFN_DOWN(bar->addr)),
+                               _mfn(PFN_DOWN(bar->addr + bar->size - 1))) )
+                bar->enabled = map;
+            header->rom_enabled = map;
             pci_conf_write32(pdev->sbdf, rom_pos, val);
             return;
         }
 
         if ( !rom_only &&
-             (bar->type != VPCI_BAR_ROM || header->rom_enabled) )
+             (bar->type != VPCI_BAR_ROM || header->rom_enabled) &&
+             pci_check_bar(pdev, _mfn(PFN_DOWN(bar->addr)),
+                           _mfn(PFN_DOWN(bar->addr + bar->size - 1))) )
             bar->enabled = map;
     }
 
@@ -234,9 +239,19 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
 
         if ( !MAPPABLE_BAR(bar) ||
              (rom_only ? bar->type != VPCI_BAR_ROM
-                       : (bar->type == VPCI_BAR_ROM && !header->rom_enabled)) )
+                       : (bar->type == VPCI_BAR_ROM && !header->rom_enabled)) ||
+             /* Skip BARs already in the requested state. */
+             bar->enabled == !!(cmd & PCI_COMMAND_MEMORY) )
             continue;
 
+        if ( !pci_check_bar(pdev, _mfn(start), _mfn(end)) )
+        {
+            printk(XENLOG_G_WARNING
+                   "%pp: not mapping BAR [%lx, %lx] invalid position\n",
+                   &pdev->sbdf, start, end);
+            continue;
+        }
+
         rc = rangeset_add_range(mem, start, end);
         if ( rc )
         {
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Fri Oct 28 09:44:25 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Oct 2022 09:44:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.431858.684488 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooLub-0002Ws-G5; Fri, 28 Oct 2022 09:44:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 431858.684488; Fri, 28 Oct 2022 09:44:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooLub-0002Wl-Da; Fri, 28 Oct 2022 09:44:25 +0000
Received: by outflank-mailman (input) for mailman id 431858;
 Fri, 28 Oct 2022 09:44:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooLua-0002WX-DZ
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 09:44:24 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooLua-0008Af-Cj
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 09:44:24 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooLua-000556-Bs
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 09:44:24 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=Ar/geZcwxXpxwg08S9t0Igul5bw6nowu4TzwxWrp6BE=; b=D2A46Vl1F/HgNorK/q/cyJi2c4
	XyzKLvm++AGwUfLsghtWe65kUQLQB/sUegj0FEUvWWLveEM+V6Gcc+ZQn/3N5d847FbW3bUI7Ls40
	5HxN9zDT5dp/+crAbBV4sjWH2eaVIewe80tDeWR82GAKmaOalmi1bMeXhvwSWs8O+HWE=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] vpci: refuse BAR writes only if the BAR is mapped
Message-Id: <E1ooLua-000556-Bs@xenbits.xenproject.org>
Date: Fri, 28 Oct 2022 09:44:24 +0000

commit 7abd7bc1626d25ada03c1cff2e8c2ce1a5cc3cbf
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Fri Oct 28 11:40:45 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Fri Oct 28 11:40:45 2022 +0200

    vpci: refuse BAR writes only if the BAR is mapped
    
    Writes to the BARs are ignored if memory decoding is enabled for the
    device, and the same happens with ROM BARs if the write is an attempt
    to change the position of the BAR without disabling it first.
    
    The reason for ignoring such writes is a limitation in Xen, as it would
    need to unmap the BAR, change the address, and remap the BAR at the
    new position, which the current logic doesn't support.
    
    Some devices however seem to (wrongly) have the memory decoding bit
    hardcoded to enabled, and attempts to disable it don't get reflected
    on the command register.
    
    This causes issues for well behaved domains that disable memory
    decoding and then try to size the BARs, as vPCI will think memory
    decoding is still enabled and ignore the write.
    
    Since vPCI doesn't explicitly care about whether the memory decoding
    bit is disabled as long as the BAR is not mapped in the domain p2m,
    use the information in the vpci_bar to check whether the BAR is
    mapped, and refuse writes based only on that information.  This works
    around the issue, and allows domains to size and reposition the BARs
    properly.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/drivers/vpci/header.c | 31 +++++++++++++++++++++----------
 xen/include/xen/vpci.h    |  6 ++++++
 2 files changed, 27 insertions(+), 10 deletions(-)

diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c
index d272b3f343..ec2e978a4e 100644
--- a/xen/drivers/vpci/header.c
+++ b/xen/drivers/vpci/header.c
@@ -131,7 +131,10 @@ static void modify_decoding(const struct pci_dev *pdev, uint16_t cmd,
     }
 
     if ( !rom_only )
+    {
         pci_conf_write16(pdev->sbdf, PCI_COMMAND, cmd);
+        header->bars_mapped = map;
+    }
     else
         ASSERT_UNREACHABLE();
 }
@@ -352,13 +355,13 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
 static void cf_check cmd_write(
     const struct pci_dev *pdev, unsigned int reg, uint32_t cmd, void *data)
 {
-    uint16_t current_cmd = pci_conf_read16(pdev->sbdf, reg);
+    struct vpci_header *header = data;
 
     /*
      * Let Dom0 play with all the bits directly except for the memory
      * decoding one.
      */
-    if ( (cmd ^ current_cmd) & PCI_COMMAND_MEMORY )
+    if ( header->bars_mapped != !!(cmd & PCI_COMMAND_MEMORY) )
         /*
          * Ignore the error. No memory has been added or removed from the p2m
          * (because the actual p2m changes are deferred in defer_map) and the
@@ -385,12 +388,16 @@ static void cf_check bar_write(
     else
         val &= PCI_BASE_ADDRESS_MEM_MASK;
 
-    if ( pci_conf_read16(pdev->sbdf, PCI_COMMAND) & PCI_COMMAND_MEMORY )
+    /*
+     * Xen only cares whether the BAR is mapped into the p2m, so allow BAR
+     * writes as long as the BAR is not mapped into the p2m.
+     */
+    if ( bar->enabled )
     {
         /* If the value written is the current one avoid printing a warning. */
         if ( val != (uint32_t)(bar->addr >> (hi ? 32 : 0)) )
             gprintk(XENLOG_WARNING,
-                    "%pp: ignored BAR %zu write with memory decoding enabled\n",
+                    "%pp: ignored BAR %zu write while mapped\n",
                     &pdev->sbdf, bar - pdev->vpci->header.bars + hi);
         return;
     }
@@ -419,25 +426,29 @@ static void cf_check rom_write(
 {
     struct vpci_header *header = &pdev->vpci->header;
     struct vpci_bar *rom = data;
-    uint16_t cmd = pci_conf_read16(pdev->sbdf, PCI_COMMAND);
     bool new_enabled = val & PCI_ROM_ADDRESS_ENABLE;
 
-    if ( (cmd & PCI_COMMAND_MEMORY) && header->rom_enabled && new_enabled )
+    /*
+     * See comment in bar_write(). Additionally since the ROM BAR has an enable
+     * bit some writes are allowed while the BAR is mapped, as long as the
+     * write is to unmap the ROM BAR.
+     */
+    if ( rom->enabled && new_enabled )
     {
         gprintk(XENLOG_WARNING,
-                "%pp: ignored ROM BAR write with memory decoding enabled\n",
+                "%pp: ignored ROM BAR write while mapped\n",
                 &pdev->sbdf);
         return;
     }
 
-    if ( !header->rom_enabled )
+    if ( !rom->enabled )
         /*
-         * If the ROM BAR is not enabled update the address field so the
+         * If the ROM BAR is not mapped update the address field so the
          * correct address is mapped into the p2m.
          */
         rom->addr = val & PCI_ROM_ADDRESS_MASK;
 
-    if ( !(cmd & PCI_COMMAND_MEMORY) || header->rom_enabled == new_enabled )
+    if ( !header->bars_mapped || rom->enabled == new_enabled )
     {
         /* Just update the ROM BAR field. */
         header->rom_enabled = new_enabled;
diff --git a/xen/include/xen/vpci.h b/xen/include/xen/vpci.h
index 67c9a0c631..d8acfeba8a 100644
--- a/xen/include/xen/vpci.h
+++ b/xen/include/xen/vpci.h
@@ -88,6 +88,12 @@ struct vpci {
          * is mapped into guest p2m) if there's a ROM BAR on the device.
          */
         bool rom_enabled      : 1;
+        /*
+         * Cache whether memory decoding is enabled from our PoV.
+         * Some devices have a sticky memory decoding so that can't be relied
+         * upon to know whether BARs are mapped into the guest p2m.
+         */
+        bool bars_mapped      : 1;
         /* FIXME: currently there's no support for SR-IOV. */
     } header;
 
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Fri Oct 28 13:55:11 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Oct 2022 13:55:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.432088.684789 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooPpB-0006St-Es; Fri, 28 Oct 2022 13:55:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 432088.684789; Fri, 28 Oct 2022 13:55:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1ooPpB-0006SI-AQ; Fri, 28 Oct 2022 13:55:05 +0000
Received: by outflank-mailman (input) for mailman id 432088;
 Fri, 28 Oct 2022 13:55:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooPpA-0006PK-RM
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 13:55:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooPpA-00042K-P8
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 13:55:04 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1ooPpA-0008N0-O6
 for xen-changelog@lists.xenproject.org; Fri, 28 Oct 2022 13:55:04 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=yLnbKpIhkUKNvwR8kAWH4KFTguFGdJE7zu4BI5MbRXQ=; b=LJKzERBDREcLG2GYgZQDg2wyLs
	8forxNjNt64gV1wdeGwaYFkT6LsvxUteNKbHuqyl/vY6jmz74uw9OIlLfIuV2WhRXPOZwleXuE8LT
	RmcyYfMGuDczTq/X6GTaUBJMyI66HyzQNDSK0hA+84jKl39cXHINbRq+eYkR6vWXylE0=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] x86/pv-shim: correctly ignore empty onlining requests
Message-Id: <E1ooPpA-0008N0-O6@xenbits.xenproject.org>
Date: Fri, 28 Oct 2022 13:55:04 +0000

commit 9272225ca72801fd9fa5b268a2d1c5adebd19cd9
Author:     Igor Druzhinin <igor.druzhinin@citrix.com>
AuthorDate: Fri Oct 28 15:47:59 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Fri Oct 28 15:47:59 2022 +0200

    x86/pv-shim: correctly ignore empty onlining requests
    
    Mem-op requests may have zero extents. Such requests need treating as
    no-ops. pv_shim_online_memory(), however, would have tried to take 2³²-1
    order-sized pages from its balloon list (to then populate them),
    typically ending when the entire set of ballooned pages of this order
    was consumed.
    
    Note that pv_shim_offline_memory() does not have such an issue.
    
    Fixes: b2245acc60c3 ("xen/pvshim: memory hotplug")
    Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/arch/x86/pv/shim.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/xen/arch/x86/pv/shim.c b/xen/arch/x86/pv/shim.c
index 49ce4f93f2..ae1a0e6e65 100644
--- a/xen/arch/x86/pv/shim.c
+++ b/xen/arch/x86/pv/shim.c
@@ -944,6 +944,9 @@ void pv_shim_online_memory(unsigned int nr, unsigned int order)
     struct page_info *page, *tmp;
     PAGE_LIST_HEAD(list);
 
+    if ( !nr )
+        return;
+
     spin_lock(&balloon_lock);
     page_list_for_each_safe ( page, tmp, &balloon )
     {
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Fri Oct 28 13:55:15 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] x86/pv-shim: correct ballooning up for compat guests
Message-Id: <E1ooPpK-0008NY-RK@xenbits.xenproject.org>
Date: Fri, 28 Oct 2022 13:55:14 +0000

commit a0bfdd201ea12aa5679bb8944d63a4e0d3c23160
Author:     Igor Druzhinin <igor.druzhinin@citrix.com>
AuthorDate: Fri Oct 28 15:48:50 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Fri Oct 28 15:48:50 2022 +0200

    x86/pv-shim: correct ballooning up for compat guests
    
    From: Igor Druzhinin <igor.druzhinin@citrix.com>
    
    The compat layer for multi-extent memory ops may need to split incoming
    requests. Since the guest handles in the interface structures may not be
    altered, it does so by leveraging do_memory_op()'s continuation
    handling: It hands on non-initial requests with a non-zero start extent,
    with the (native) handle suitably adjusted down. As a result
    do_memory_op() sees only the first of potentially several requests with
    start extent being zero. It is only in that case that the function
    issues a call to pv_shim_online_memory(), yet the range then covers
    only the first sub-range resulting from the split.
    
    Address that breakage by making a complementary call to
    pv_shim_online_memory() in the compat layer.
    
    Fixes: b2245acc60c3 ("xen/pvshim: memory hotplug")
    Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/common/compat/memory.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/xen/common/compat/memory.c b/xen/common/compat/memory.c
index 56c7de1dea..8ca63ceda6 100644
--- a/xen/common/compat/memory.c
+++ b/xen/common/compat/memory.c
@@ -7,6 +7,7 @@ EMIT_FILE;
 #include <xen/event.h>
 #include <xen/mem_access.h>
 #include <asm/current.h>
+#include <asm/guest.h>
 #include <compat/memory.h>
 
 #define xen_domid_t domid_t
@@ -146,7 +147,10 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
                 nat.rsrv->nr_extents = end_extent;
                 ++split;
             }
-
+           /* Avoid calling pv_shim_online_memory() when in a continuation. */
+           if ( pv_shim && op != XENMEM_decrease_reservation && !start_extent )
+               pv_shim_online_memory(cmp.rsrv.nr_extents - nat.rsrv->nr_extents,
+                                     cmp.rsrv.extent_order);
             break;
 
         case XENMEM_exchange:
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Fri Oct 28 13:55:26 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging] x86/pv-shim: correct ballooning down for compat guests
Message-Id: <E1ooPpU-0008O7-VC@xenbits.xenproject.org>
Date: Fri, 28 Oct 2022 13:55:24 +0000

commit 1d7fbc535d1d37bdc2cc53ede360b0f6651f7de1
Author:     Igor Druzhinin <igor.druzhinin@citrix.com>
AuthorDate: Fri Oct 28 15:49:33 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Fri Oct 28 15:49:33 2022 +0200

    x86/pv-shim: correct ballooning down for compat guests
    
    From: Igor Druzhinin <igor.druzhinin@citrix.com>
    
    The compat layer for multi-extent memory ops may need to split incoming
    requests. Since the guest handles in the interface structures may not be
    altered, it does so by leveraging do_memory_op()'s continuation
    handling: It hands on non-initial requests with a non-zero start extent,
    with the (native) handle suitably adjusted down. As a result
    do_memory_op() sees only the first of potentially several requests with
    start extent being zero. In order to be usable as the overall result,
    the function accumulates args.nr_done, i.e. it initializes the field
    with the start extent. Therefore non-initial requests resulting from the
    split would pass too large a number into pv_shim_offline_memory().
    
    Address that breakage by always calling pv_shim_offline_memory()
    regardless of current hypercall preemption status, with a suitably
    adjusted first argument. Note that this is correct also for the native
    guest case: We now simply "commit" what was completed right away, rather
    than at the end of a series of preemption/re-start cycles. In fact this
    improves overall preemption behavior: There's no longer a potentially
    big chunk of work done non-preemptively at the end of the last
    "iteration".
    
    Fixes: b2245acc60c3 ("xen/pvshim: memory hotplug")
    Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/common/memory.c | 19 +++++++------------
 1 file changed, 7 insertions(+), 12 deletions(-)

diff --git a/xen/common/memory.c b/xen/common/memory.c
index ae8163a738..a15e5580f3 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1461,22 +1461,17 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 
         rc = args.nr_done;
 
-        if ( args.preempted )
-            return hypercall_create_continuation(
-                __HYPERVISOR_memory_op, "lh",
-                op | (rc << MEMOP_EXTENT_SHIFT), arg);
-
 #ifdef CONFIG_X86
         if ( pv_shim && op == XENMEM_decrease_reservation )
-            /*
-             * Only call pv_shim_offline_memory when the hypercall has
-             * finished. Note that nr_done is used to cope in case the
-             * hypercall has failed and only part of the extents where
-             * processed.
-             */
-            pv_shim_offline_memory(args.nr_done, args.extent_order);
+            pv_shim_offline_memory(args.nr_done - start_extent,
+                                   args.extent_order);
 #endif
 
+        if ( args.preempted )
+           return hypercall_create_continuation(
+                __HYPERVISOR_memory_op, "lh",
+                op | (rc << MEMOP_EXTENT_SHIFT), arg);
+
         break;
 
     case XENMEM_exchange:
--
generated by git-patchbot for /home/xen/git/xen.git#staging


From xen-changelog-bounces@lists.xenproject.org Fri Oct 28 19:55:11 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] vpci: include xen/vmap.h to fix build on ARM
Message-Id: <E1ooVRW-00043T-Ao@xenbits.xenproject.org>
Date: Fri, 28 Oct 2022 19:55:02 +0000

commit 2ca833688abd4ce88f8eba06ee98c08d35d2d486
Author:     Volodymyr Babchuk <volodymyr_babchuk@epam.com>
AuthorDate: Thu Oct 27 11:48:36 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Thu Oct 27 11:48:36 2022 +0200

    vpci: include xen/vmap.h to fix build on ARM
    
    Patch b4f211606011 ("vpci/msix: fix PBA accesses") introduced a call to
    iounmap(), but did not add the corresponding include.
    
    Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/drivers/vpci/vpci.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
index 98198dc2c9..6d48d496bb 100644
--- a/xen/drivers/vpci/vpci.c
+++ b/xen/drivers/vpci/vpci.c
@@ -19,6 +19,7 @@
 
 #include <xen/sched.h>
 #include <xen/vpci.h>
+#include <xen/vmap.h>
 
 /* Internal struct to store the emulated PCI registers. */
 struct vpci_register {
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Fri Oct 28 19:55:13 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] x86: also zap secondary time area handles during soft reset
Message-Id: <E1ooVRg-00043y-FB@xenbits.xenproject.org>
Date: Fri, 28 Oct 2022 19:55:12 +0000

commit b80d4f8d2ea6418e32fb4f20d1304ace6d6566e3
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Thu Oct 27 11:49:09 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Thu Oct 27 11:49:09 2022 +0200

    x86: also zap secondary time area handles during soft reset
    
    Just like domain_soft_reset() properly zaps runstate area handles, the
    secondary time area ones also need discarding to prevent guest memory
    corruption once the guest is re-started.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/arch/x86/domain.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index a5d2d66852..ce82c502bb 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -951,6 +951,7 @@ int arch_domain_soft_reset(struct domain *d)
     struct page_info *page = virt_to_page(d->shared_info), *new_page;
     int ret = 0;
     struct domain *owner;
+    struct vcpu *v;
     mfn_t mfn;
     gfn_t gfn;
     p2m_type_t p2mt;
@@ -1030,7 +1031,12 @@ int arch_domain_soft_reset(struct domain *d)
                "Failed to add a page to replace %pd's shared_info frame %"PRI_gfn"\n",
                d, gfn_x(gfn));
         free_domheap_page(new_page);
+        goto exit_put_gfn;
     }
+
+    for_each_vcpu ( d, v )
+        set_xen_guest_handle(v->arch.time_info_guest, NULL);
+
  exit_put_gfn:
     put_gfn(d, gfn_x(gfn));
  exit_put_page:
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Fri Oct 28 19:55:23 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] Arm32: prune (again) ld warning about mismatched wchar_t sizes
Message-Id: <E1ooVRq-00044S-Jl@xenbits.xenproject.org>
Date: Fri, 28 Oct 2022 19:55:22 +0000

commit 20cf0ab774e828dc4e75ecebecf56b53aca754fc
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Thu Oct 27 11:50:47 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Thu Oct 27 11:50:47 2022 +0200

    Arm32: prune (again) ld warning about mismatched wchar_t sizes
    
    The name change (stub.c -> common-stub.c) rendered the earlier
    workaround (commit a4d4c541f58b ["xen/arm32: avoid EFI stub wchar_t size
    linker warning"]) ineffectual.
    
    Fixes: bfd3e9945d1b ("build: fix x86 out-of-tree build without EFI")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/arch/arm/efi/Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/arm/efi/Makefile b/xen/arch/arm/efi/Makefile
index 2459cbae3a..74b7274bdd 100644
--- a/xen/arch/arm/efi/Makefile
+++ b/xen/arch/arm/efi/Makefile
@@ -6,6 +6,6 @@ obj-$(CONFIG_ACPI) +=  efi-dom0.init.o
 else
 obj-y += common-stub.o
 
-$(obj)/stub.o: CFLAGS-y += -fno-short-wchar
+$(obj)/common-stub.o: CFLAGS-y += -fno-short-wchar
 
 endif
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Sat Oct 29 06:33:08 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] common: map_vcpu_info() wants to unshare the underlying page
Message-Id: <E1oofOv-000610-Ly@xenbits.xenproject.org>
Date: Sat, 29 Oct 2022 06:33:01 +0000

commit 48980cf24d5cf41fd644600f99c753419505e735
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Fri Oct 28 11:38:32 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Fri Oct 28 11:38:32 2022 +0200

    common: map_vcpu_info() wants to unshare the underlying page
    
    Not passing P2M_UNSHARE to get_page_from_gfn() means there won't even be
    an attempt to unshare the referenced page, without any indication to the
    caller (e.g. -EAGAIN). Note that guests have no direct control over
    which of their pages are shared (or paged out), and hence they have no
    way, all on their own, to make sure that the subsequent obtaining of a
    writable type reference can actually succeed.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/common/domain.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index 8dd6cd5a8f..53f7e734fe 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -1484,7 +1484,7 @@ int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned int offset)
     if ( (v != current) && !(v->pause_flags & VPF_down) )
         return -EINVAL;
 
-    page = get_page_from_gfn(d, gfn, NULL, P2M_ALLOC);
+    page = get_page_from_gfn(d, gfn, NULL, P2M_UNSHARE);
     if ( !page )
         return -EINVAL;
 
--
generated by git-patchbot for /home/xen/git/xen.git#master


From xen-changelog-bounces@lists.xenproject.org Sat Oct 29 06:33:13 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 29 Oct 2022 06:33:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.432286.685080 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oofP7-0005kO-6a; Sat, 29 Oct 2022 06:33:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 432286.685080; Sat, 29 Oct 2022 06:33:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1oofP7-0005kG-3t; Sat, 29 Oct 2022 06:33:13 +0000
Received: by outflank-mailman (input) for mailman id 432286;
 Sat, 29 Oct 2022 06:33:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oofP5-0005k4-Rd
 for xen-changelog@lists.xenproject.org; Sat, 29 Oct 2022 06:33:11 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oofP5-00045i-Qs
 for xen-changelog@lists.xenproject.org; Sat, 29 Oct 2022 06:33:11 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1oofP5-00061P-PD
 for xen-changelog@lists.xenproject.org; Sat, 29 Oct 2022 06:33:11 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=b1SAIguNy5tGGefTQSqVwkwfNpdqzMpmiIPEwGefglE=; b=aPQZtRgwACSC9J84dPiRO4FN+j
	Bpfw/Njxve4nmf6nAr8KXlkgXEdZN0GNK0I6Jl0eA5rFXJtw45r2qvP8uZQrTgUgQ/9/3IiBQQ5OT
	nlzCeiTvXENOpnoZVbS6MMxIauT8ItwHyPxrjzI2HlXDFbTb0EAcyL635a+WK7y/DlME=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] pci: do not disable memory decoding for devices
Message-Id: <E1oofP5-00061P-PD@xenbits.xenproject.org>
Date: Sat, 29 Oct 2022 06:33:11 +0000

commit 53d9133638c3f940a53df60352fabb0963d67ad3
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Fri Oct 28 11:40:00 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Fri Oct 28 11:40:00 2022 +0200

    pci: do not disable memory decoding for devices
    
    Commit 75cc460a1b added checks to ensure the positions of the BARs of
    PCI devices don't overlap with regions defined on the memory map.
    When there's a collision, memory decoding is left disabled for the
    device, assuming that dom0 will reposition the BAR if necessary and
    enable memory decoding.
    
    While this would be the case for devices being used by dom0, devices
    being used by the firmware itself that have no driver would usually be
    left with memory decoding disabled by dom0 if that's the state dom0
    found them in, and thus firmware trying to make use of them will not
    function correctly.
    
    The initial intent of 75cc460a1b was to prevent vPCI from creating
    MMIO mappings on the dom0 p2m over regions that would otherwise
    already have mappings established.  It's my view now that we likely
    went too far with 75cc460a1b, and Xen disabling memory decoding of
    devices (as buggy as they might be) is harmful, and reduces the set of
    hardware on which Xen works.
    
    This commit reverts most of 75cc460a1b, and instead adds checks to
    vPCI in order to prevent misplaced BARs from being added to the
    hardware domain p2m.  Whether a BAR is mapped is tracked in the
    vpci structure, so that misplaced BARs are not mapped, and thus
    Xen won't attempt to unmap them when memory decoding is disabled.
    
    This restores the behavior of Xen for PV dom0 to the state it was in
    prior to 75cc460a1b, while also introducing a more contained fix
    for the vPCI BAR mapping issues.
    
    Fixes: 75cc460a1b ('xen/pci: detect when BARs are not suitably positioned')
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/drivers/passthrough/pci.c | 69 -------------------------------------------
 xen/drivers/vpci/header.c     | 21 +++++++++++--
 2 files changed, 18 insertions(+), 72 deletions(-)

diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index 149f68bb6e..b42acb8d7c 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -233,9 +233,6 @@ static void check_pdev(const struct pci_dev *pdev)
      PCI_STATUS_REC_TARGET_ABORT | PCI_STATUS_REC_MASTER_ABORT | \
      PCI_STATUS_SIG_SYSTEM_ERROR | PCI_STATUS_DETECTED_PARITY)
     u16 val;
-    unsigned int nbars = 0, rom_pos = 0, i;
-    static const char warn[] = XENLOG_WARNING
-        "%pp disabled: %sBAR [%#lx, %#lx] overlaps with memory map\n";
 
     if ( command_mask )
     {
@@ -254,8 +251,6 @@ static void check_pdev(const struct pci_dev *pdev)
     switch ( pci_conf_read8(pdev->sbdf, PCI_HEADER_TYPE) & 0x7f )
     {
     case PCI_HEADER_TYPE_BRIDGE:
-        nbars = PCI_HEADER_BRIDGE_NR_BARS;
-        rom_pos = PCI_ROM_ADDRESS1;
         if ( !bridge_ctl_mask )
             break;
         val = pci_conf_read16(pdev->sbdf, PCI_BRIDGE_CONTROL);
@@ -272,75 +267,11 @@ static void check_pdev(const struct pci_dev *pdev)
         }
         break;
 
-    case PCI_HEADER_TYPE_NORMAL:
-        nbars = PCI_HEADER_NORMAL_NR_BARS;
-        rom_pos = PCI_ROM_ADDRESS;
-        break;
-
     case PCI_HEADER_TYPE_CARDBUS:
         /* TODO */
         break;
     }
 #undef PCI_STATUS_CHECK
-
-    /* Check if BARs overlap with other memory regions. */
-    val = pci_conf_read16(pdev->sbdf, PCI_COMMAND);
-    if ( !(val & PCI_COMMAND_MEMORY) || pdev->ignore_bars )
-        return;
-
-    pci_conf_write16(pdev->sbdf, PCI_COMMAND, val & ~PCI_COMMAND_MEMORY);
-    for ( i = 0; i < nbars; )
-    {
-        uint64_t addr, size;
-        unsigned int reg = PCI_BASE_ADDRESS_0 + i * 4;
-        int rc = 1;
-
-        if ( (pci_conf_read32(pdev->sbdf, reg) & PCI_BASE_ADDRESS_SPACE) !=
-             PCI_BASE_ADDRESS_SPACE_MEMORY )
-            goto next;
-
-        rc = pci_size_mem_bar(pdev->sbdf, reg, &addr, &size,
-                              (i == nbars - 1) ? PCI_BAR_LAST : 0);
-        if ( rc < 0 )
-            /* Unable to size, better leave memory decoding disabled. */
-            return;
-        if ( size && !pci_check_bar(pdev, maddr_to_mfn(addr),
-                                    maddr_to_mfn(addr + size - 1)) )
-        {
-            /*
-             * Return without enabling memory decoding if BAR position is not
-             * in IO suitable memory. Let the hardware domain re-position the
-             * BAR.
-             */
-            printk(warn,
-                   &pdev->sbdf, "", PFN_DOWN(addr), PFN_DOWN(addr + size - 1));
-            return;
-        }
-
- next:
-        ASSERT(rc > 0);
-        i += rc;
-    }
-
-    if ( rom_pos &&
-         (pci_conf_read32(pdev->sbdf, rom_pos) & PCI_ROM_ADDRESS_ENABLE) )
-    {
-        uint64_t addr, size;
-        int rc = pci_size_mem_bar(pdev->sbdf, rom_pos, &addr, &size,
-                                  PCI_BAR_ROM);
-
-        if ( rc < 0 )
-            return;
-        if ( size && !pci_check_bar(pdev, maddr_to_mfn(addr),
-                                    maddr_to_mfn(addr + size - 1)) )
-        {
-            printk(warn, &pdev->sbdf, "ROM ", PFN_DOWN(addr),
-                   PFN_DOWN(addr + size - 1));
-            return;
-        }
-    }
-
-    pci_conf_write16(pdev->sbdf, PCI_COMMAND, val);
 }
 
 static void apply_quirks(struct pci_dev *pdev)
diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c
index eb9219a49a..d272b3f343 100644
--- a/xen/drivers/vpci/header.c
+++ b/xen/drivers/vpci/header.c
@@ -115,13 +115,18 @@ static void modify_decoding(const struct pci_dev *pdev, uint16_t cmd,
             uint32_t val = bar->addr |
                            (map ? PCI_ROM_ADDRESS_ENABLE : 0);
 
-            bar->enabled = header->rom_enabled = map;
+            if ( pci_check_bar(pdev, _mfn(PFN_DOWN(bar->addr)),
+                               _mfn(PFN_DOWN(bar->addr + bar->size - 1))) )
+                bar->enabled = map;
+            header->rom_enabled = map;
             pci_conf_write32(pdev->sbdf, rom_pos, val);
             return;
         }
 
         if ( !rom_only &&
-             (bar->type != VPCI_BAR_ROM || header->rom_enabled) )
+             (bar->type != VPCI_BAR_ROM || header->rom_enabled) &&
+             pci_check_bar(pdev, _mfn(PFN_DOWN(bar->addr)),
+                           _mfn(PFN_DOWN(bar->addr + bar->size - 1))) )
             bar->enabled = map;
     }
 
@@ -234,9 +239,19 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
 
         if ( !MAPPABLE_BAR(bar) ||
              (rom_only ? bar->type != VPCI_BAR_ROM
-                       : (bar->type == VPCI_BAR_ROM && !header->rom_enabled)) )
+                       : (bar->type == VPCI_BAR_ROM && !header->rom_enabled)) ||
+             /* Skip BARs already in the requested state. */
+             bar->enabled == !!(cmd & PCI_COMMAND_MEMORY) )
             continue;
 
+        if ( !pci_check_bar(pdev, _mfn(start), _mfn(end)) )
+        {
+            printk(XENLOG_G_WARNING
+                   "%pp: not mapping BAR [%lx, %lx] invalid position\n",
+                   &pdev->sbdf, start, end);
+            continue;
+        }
+
         rc = rangeset_add_range(mem, start, end);
         if ( rc )
         {
--
generated by git-patchbot for /home/xen/git/xen.git#master
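The check reverted from check_pdev() and reintroduced in vPCI above boils down to an interval-overlap test: a BAR is mapped into the hardware domain p2m only when its frame range avoids every reserved region, and a misplaced BAR is simply skipped rather than having the device's memory decoding disabled. A minimal Python sketch of that decision follows; the names `bar_is_mappable` and `map_bars` are invented for illustration, the real logic is the C in the patch above.

```python
def bar_is_mappable(start, end, reserved):
    """True when the inclusive frame range [start, end] overlaps no
    reserved range in the (r_start, r_end) list."""
    return all(end < r_start or start > r_end for r_start, r_end in reserved)

def map_bars(bars, reserved):
    """Return the subset of (start, end) BARs that would be mapped.
    Misplaced BARs are skipped; decoding is left untouched."""
    return [(s, e) for s, e in bars if bar_is_mappable(s, e, reserved)]
```

For example, a BAR overlapping a reserved region is dropped from the mapped set while the others are kept, mirroring how the patch tracks per-BAR mapping state instead of disabling decoding device-wide.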


From xen-changelog-bounces@lists.xenproject.org Sat Oct 29 06:33:22 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] vpci: refuse BAR writes only if the BAR is mapped
Message-Id: <E1oofPF-00061o-TJ@xenbits.xenproject.org>
Date: Sat, 29 Oct 2022 06:33:21 +0000

commit 7abd7bc1626d25ada03c1cff2e8c2ce1a5cc3cbf
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Fri Oct 28 11:40:45 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Fri Oct 28 11:40:45 2022 +0200

    vpci: refuse BAR writes only if the BAR is mapped
    
    Writes to the BARs are ignored if memory decoding is enabled for the
    device, and the same happens with ROM BARs if the write is an attempt
    to change the position of the BAR without disabling it first.
    
    The reason for ignoring such writes is a limitation in Xen: it would
    need to unmap the BAR, change the address, and remap the BAR at the
    new position, which the current logic doesn't support.
    
    Some devices however seem to (wrongly) have the memory decoding bit
    hardcoded to enabled, and attempts to disable it don't get reflected
    on the command register.
    
    This causes issues for well behaved domains that disable memory
    decoding and then try to size the BARs, as vPCI will think memory
    decoding is still enabled and ignore the write.
    
    Since vPCI doesn't explicitly care about whether the memory decoding
    bit is disabled as long as the BAR is not mapped in the domain p2m,
    use the information in the vpci_bar to check whether the BAR is
    mapped, and refuse writes based only on that information.  This works
    around the issue, and allows domains to size and reposition the BARs
    properly.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/drivers/vpci/header.c | 31 +++++++++++++++++++++----------
 xen/include/xen/vpci.h    |  6 ++++++
 2 files changed, 27 insertions(+), 10 deletions(-)

diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c
index d272b3f343..ec2e978a4e 100644
--- a/xen/drivers/vpci/header.c
+++ b/xen/drivers/vpci/header.c
@@ -131,7 +131,10 @@ static void modify_decoding(const struct pci_dev *pdev, uint16_t cmd,
     }
 
     if ( !rom_only )
+    {
         pci_conf_write16(pdev->sbdf, PCI_COMMAND, cmd);
+        header->bars_mapped = map;
+    }
     else
         ASSERT_UNREACHABLE();
 }
@@ -352,13 +355,13 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
 static void cf_check cmd_write(
     const struct pci_dev *pdev, unsigned int reg, uint32_t cmd, void *data)
 {
-    uint16_t current_cmd = pci_conf_read16(pdev->sbdf, reg);
+    struct vpci_header *header = data;
 
     /*
      * Let Dom0 play with all the bits directly except for the memory
      * decoding one.
      */
-    if ( (cmd ^ current_cmd) & PCI_COMMAND_MEMORY )
+    if ( header->bars_mapped != !!(cmd & PCI_COMMAND_MEMORY) )
         /*
          * Ignore the error. No memory has been added or removed from the p2m
          * (because the actual p2m changes are deferred in defer_map) and the
@@ -385,12 +388,16 @@ static void cf_check bar_write(
     else
         val &= PCI_BASE_ADDRESS_MEM_MASK;
 
-    if ( pci_conf_read16(pdev->sbdf, PCI_COMMAND) & PCI_COMMAND_MEMORY )
+    /*
+     * Xen only cares whether the BAR is mapped into the p2m, so allow BAR
+     * writes as long as the BAR is not mapped into the p2m.
+     */
+    if ( bar->enabled )
     {
         /* If the value written is the current one avoid printing a warning. */
         if ( val != (uint32_t)(bar->addr >> (hi ? 32 : 0)) )
             gprintk(XENLOG_WARNING,
-                    "%pp: ignored BAR %zu write with memory decoding enabled\n",
+                    "%pp: ignored BAR %zu write while mapped\n",
                     &pdev->sbdf, bar - pdev->vpci->header.bars + hi);
         return;
     }
@@ -419,25 +426,29 @@ static void cf_check rom_write(
 {
     struct vpci_header *header = &pdev->vpci->header;
     struct vpci_bar *rom = data;
-    uint16_t cmd = pci_conf_read16(pdev->sbdf, PCI_COMMAND);
     bool new_enabled = val & PCI_ROM_ADDRESS_ENABLE;
 
-    if ( (cmd & PCI_COMMAND_MEMORY) && header->rom_enabled && new_enabled )
+    /*
+     * See comment in bar_write(). Additionally since the ROM BAR has an enable
+     * bit some writes are allowed while the BAR is mapped, as long as the
+     * write is to unmap the ROM BAR.
+     */
+    if ( rom->enabled && new_enabled )
     {
         gprintk(XENLOG_WARNING,
-                "%pp: ignored ROM BAR write with memory decoding enabled\n",
+                "%pp: ignored ROM BAR write while mapped\n",
                 &pdev->sbdf);
         return;
     }
 
-    if ( !header->rom_enabled )
+    if ( !rom->enabled )
         /*
-         * If the ROM BAR is not enabled update the address field so the
+         * If the ROM BAR is not mapped update the address field so the
          * correct address is mapped into the p2m.
          */
         rom->addr = val & PCI_ROM_ADDRESS_MASK;
 
-    if ( !(cmd & PCI_COMMAND_MEMORY) || header->rom_enabled == new_enabled )
+    if ( !header->bars_mapped || rom->enabled == new_enabled )
     {
         /* Just update the ROM BAR field. */
         header->rom_enabled = new_enabled;
diff --git a/xen/include/xen/vpci.h b/xen/include/xen/vpci.h
index 67c9a0c631..d8acfeba8a 100644
--- a/xen/include/xen/vpci.h
+++ b/xen/include/xen/vpci.h
@@ -88,6 +88,12 @@ struct vpci {
          * is mapped into guest p2m) if there's a ROM BAR on the device.
          */
         bool rom_enabled      : 1;
+        /*
+         * Cache whether memory decoding is enabled from our PoV.
+         * Some devices have a sticky memory decoding so that can't be relied
+         * upon to know whether BARs are mapped into the guest p2m.
+         */
+        bool bars_mapped      : 1;
         /* FIXME: currently there's no support for SR-IOV. */
     } header;
 
--
generated by git-patchbot for /home/xen/git/xen.git#master
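The new gating above can be summarized as: an ordinary BAR write is refused only while that specific BAR is mapped, and a ROM BAR write is additionally allowed while mapped if it clears the ROM enable bit. A toy Python model of those two rules (names `VpciBar`, `bar_write` and `rom_write` loosely echo the patch but are simplifications, not the Xen code):

```python
class VpciBar:
    def __init__(self, addr=0):
        self.addr = addr       # current BAR base address
        self.enabled = False   # True once the BAR is mapped into the p2m

def bar_write(bar, val):
    """Accept the write (True) unless the BAR is currently mapped."""
    if bar.enabled:
        return False  # repositioning a mapped BAR is unsupported
    bar.addr = val
    return True

def rom_write(rom, val, new_enabled):
    """ROM BAR: while mapped, only a write clearing the enable bit is
    accepted; the address field is only updated while unmapped."""
    if rom.enabled and new_enabled:
        return False
    if not rom.enabled:
        rom.addr = val
    return True
```

A well-behaved guest that disables decoding before sizing now succeeds even on devices whose memory-decoding bit is sticky, because the decision keys off the cached mapping state rather than the command register.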


From xen-changelog-bounces@lists.xenproject.org Sat Oct 29 06:33:33 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] x86/pv-shim: correctly ignore empty onlining requests
Message-Id: <E1oofPQ-00062J-0b@xenbits.xenproject.org>
Date: Sat, 29 Oct 2022 06:33:32 +0000

commit 9272225ca72801fd9fa5b268a2d1c5adebd19cd9
Author:     Igor Druzhinin <igor.druzhinin@citrix.com>
AuthorDate: Fri Oct 28 15:47:59 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Fri Oct 28 15:47:59 2022 +0200

    x86/pv-shim: correctly ignore empty onlining requests
    
    Mem-op requests may have zero extents. Such requests need treating as
    no-ops. pv_shim_online_memory(), however, would have tried to take 2³²-1
    order-sized pages from its balloon list (to then populate them),
    typically ending when the entire set of ballooned pages of this order
    was consumed.
    
    Note that pv_shim_offline_memory() does not have such an issue.
    
    Fixes: b2245acc60c3 ("xen/pvshim: memory hotplug")
    Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/arch/x86/pv/shim.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/xen/arch/x86/pv/shim.c b/xen/arch/x86/pv/shim.c
index 49ce4f93f2..ae1a0e6e65 100644
--- a/xen/arch/x86/pv/shim.c
+++ b/xen/arch/x86/pv/shim.c
@@ -944,6 +944,9 @@ void pv_shim_online_memory(unsigned int nr, unsigned int order)
     struct page_info *page, *tmp;
     PAGE_LIST_HEAD(list);
 
+    if ( !nr )
+        return;
+
     spin_lock(&balloon_lock);
     page_list_for_each_safe ( page, tmp, &balloon )
     {
--
generated by git-patchbot for /home/xen/git/xen.git#master
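The failure mode described above is classic unsigned wrap-around: with a 32-bit unsigned count, a "one less than zero" style computation on an empty request yields 2^32-1. The sketch below (hypothetical, simplified; the real function walks a balloon page list) shows the wrap and the guard the patch adds:

```python
def wrapped_count(nr):
    """Model a 32-bit unsigned 'nr - 1': an empty request wraps to 2**32 - 1."""
    return (nr - 1) & 0xFFFFFFFF

def pv_shim_online_memory_model(nr):
    """Simplified model of the fixed entry point: empty requests are no-ops."""
    if not nr:
        return 0  # the added early return
    return nr     # extents that would be taken from the balloon list
```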


From xen-changelog-bounces@lists.xenproject.org Sat Oct 29 06:33:43 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] x86/pv-shim: correct ballooning up for compat guests
Message-Id: <E1oofPa-00062i-3Y@xenbits.xenproject.org>
Date: Sat, 29 Oct 2022 06:33:42 +0000

commit a0bfdd201ea12aa5679bb8944d63a4e0d3c23160
Author:     Igor Druzhinin <igor.druzhinin@citrix.com>
AuthorDate: Fri Oct 28 15:48:50 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Fri Oct 28 15:48:50 2022 +0200

    x86/pv-shim: correct ballooning up for compat guests
    
    The compat layer for multi-extent memory ops may need to split incoming
    requests. Since the guest handles in the interface structures may not be
    altered, it does so by leveraging do_memory_op()'s continuation
    handling: It hands on non-initial requests with a non-zero start extent,
    with the (native) handle suitably adjusted down. As a result
    do_memory_op() sees only the first of potentially several requests with
    start extent being zero. It's only that case when the function would
    issue a call to pv_shim_online_memory(), yet the range then covers only
    the first sub-range that results from the split.
    
    Address that breakage by making a complementary call to
    pv_shim_online_memory() in the compat layer.
    
    Fixes: b2245acc60c3 ("xen/pvshim: memory hotplug")
    Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/common/compat/memory.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/xen/common/compat/memory.c b/xen/common/compat/memory.c
index 56c7de1dea..8ca63ceda6 100644
--- a/xen/common/compat/memory.c
+++ b/xen/common/compat/memory.c
@@ -7,6 +7,7 @@ EMIT_FILE;
 #include <xen/event.h>
 #include <xen/mem_access.h>
 #include <asm/current.h>
+#include <asm/guest.h>
 #include <compat/memory.h>
 
 #define xen_domid_t domid_t
@@ -146,7 +147,10 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
                 nat.rsrv->nr_extents = end_extent;
                 ++split;
             }
-
+           /* Avoid calling pv_shim_online_memory() when in a continuation. */
+           if ( pv_shim && op != XENMEM_decrease_reservation && !start_extent )
+               pv_shim_online_memory(cmp.rsrv.nr_extents - nat.rsrv->nr_extents,
+                                     cmp.rsrv.extent_order);
             break;
 
         case XENMEM_exchange:
--
generated by git-patchbot for /home/xen/git/xen.git#master
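The split-and-continue scheme described above can be sketched as follows: a large multi-extent request is processed in chunks, each hand-off carrying a non-zero start extent, so only the first chunk has start_extent == 0. Before the fix, only that first chunk was accounted to pv_shim_online_memory(); the complementary per-chunk call accounts them all. This Python model uses invented names (`split_request`, `extents_accounted`) purely for illustration:

```python
def split_request(nr_extents, max_per_call):
    """Split a request into (start_extent, count) chunks, mimicking how
    the compat layer re-enters do_memory_op() with adjusted handles."""
    chunks, start = [], 0
    while start < nr_extents:
        count = min(max_per_call, nr_extents - start)
        chunks.append((start, count))
        start += count
    return chunks

def extents_accounted(chunks, per_chunk_fix):
    """Extents reported for onlining: all chunks after the fix, only the
    start_extent == 0 chunk before it."""
    if per_chunk_fix:
        return sum(count for _, count in chunks)
    return next((count for start, count in chunks if start == 0), 0)
```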


From xen-changelog-bounces@lists.xenproject.org Sat Oct 29 06:33:53 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen master] x86/pv-shim: correct ballooning down for compat guests
Message-Id: <E1oofPk-00063u-6p@xenbits.xenproject.org>
Date: Sat, 29 Oct 2022 06:33:52 +0000

commit 1d7fbc535d1d37bdc2cc53ede360b0f6651f7de1
Author:     Igor Druzhinin <igor.druzhinin@citrix.com>
AuthorDate: Fri Oct 28 15:49:33 2022 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Fri Oct 28 15:49:33 2022 +0200

    x86/pv-shim: correct ballooning down for compat guests
    
    The compat layer for multi-extent memory ops may need to split incoming
    requests. Since the guest handles in the interface structures may not be
    altered, it does so by leveraging do_memory_op()'s continuation
    handling: It hands on non-initial requests with a non-zero start extent,
    with the (native) handle suitably adjusted down. As a result
    do_memory_op() sees only the first of potentially several requests with
    start extent being zero. In order to be usable as the overall result,
    the function accumulates args.nr_done, i.e. it initializes the field
    with the start extent. Therefore non-initial requests resulting from the
    split would pass too large a number into pv_shim_offline_memory().
    
    Address that breakage by always calling pv_shim_offline_memory()
    regardless of current hypercall preemption status, with a suitably
    adjusted first argument. Note that this is correct also for the native
    guest case: We now simply "commit" what was completed right away, rather
    than at the end of a series of preemption/re-start cycles. In fact this
    improves overall preemption behavior: There's no longer a potentially
    big chunk of work done non-preemptively at the end of the last
    "iteration".
    
    Fixes: b2245acc60c3 ("xen/pvshim: memory hotplug")
    Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/common/memory.c | 19 +++++++------------
 1 file changed, 7 insertions(+), 12 deletions(-)

diff --git a/xen/common/memory.c b/xen/common/memory.c
index ae8163a738..a15e5580f3 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1461,22 +1461,17 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 
         rc = args.nr_done;
 
-        if ( args.preempted )
-            return hypercall_create_continuation(
-                __HYPERVISOR_memory_op, "lh",
-                op | (rc << MEMOP_EXTENT_SHIFT), arg);
-
 #ifdef CONFIG_X86
         if ( pv_shim && op == XENMEM_decrease_reservation )
-            /*
-             * Only call pv_shim_offline_memory when the hypercall has
-             * finished. Note that nr_done is used to cope in case the
-             * hypercall has failed and only part of the extents where
-             * processed.
-             */
-            pv_shim_offline_memory(args.nr_done, args.extent_order);
+            pv_shim_offline_memory(args.nr_done - start_extent,
+                                   args.extent_order);
 #endif
 
+        if ( args.preempted )
+           return hypercall_create_continuation(
+                __HYPERVISOR_memory_op, "lh",
+                op | (rc << MEMOP_EXTENT_SHIFT), arg);
+
         break;
 
     case XENMEM_exchange:
--
generated by git-patchbot for /home/xen/git/xen.git#master
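Since args.nr_done is seeded with start_extent and accumulates across preemption cycles, the per-cycle delta is nr_done - start_extent, and summing the deltas over all cycles recovers the total, which is exactly why the patched code can commit each cycle's work immediately. A small Python illustration (the helper name `offlined_per_cycle` is invented):

```python
def offlined_per_cycle(cycles):
    """cycles: list of (start_extent, nr_done) pairs, one per preemption
    cycle, where nr_done is cumulative. Returns the extents committed to
    pv_shim_offline_memory() in each cycle under the fixed accounting."""
    return [nr_done - start_extent for start_extent, nr_done in cycles]
```

For a request processed in three cycles the deltas sum to the full extent count, whereas the old code would have passed the cumulative nr_done of non-initial cycles, over-counting.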


From xen-changelog-bounces@lists.xenproject.org Mon Oct 31 12:33:10 2022
	7SXWIjq1LXhSYIA8gmsyADOD7/nKf5vD+VNO9nOLhXnunkOILcMiR3LD9bwVT39xvkKk=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.16] x86emul: respect NSCB
Message-Id: <E1opTyS-0001nm-8o@xenbits.xenproject.org>
Date: Mon, 31 Oct 2022 12:33:04 +0000

commit 5dae06578cd5dcc312175b00ed6836a85732438d
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Mon Oct 31 13:19:35 2022 +0100
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 31 13:19:35 2022 +0100

    x86emul: respect NSCB
    
    protmode_load_seg() would better adhere to that "feature" of clearing
    base (and limit) during NULL selector loads.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 87a20c98d9f0f422727fe9b4b9e22c2c43a5cd9c
    master date: 2022-10-11 14:30:41 +0200
---
 xen/arch/x86/x86_emulate/x86_emulate.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/x86_emulate/x86_emulate.c b/xen/arch/x86/x86_emulate/x86_emulate.c
index 441086ea86..847f8f3771 100644
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -1970,6 +1970,7 @@ amd_like(const struct x86_emulate_ctxt *ctxt)
 #define vcpu_has_tbm()         (ctxt->cpuid->extd.tbm)
 #define vcpu_has_clzero()      (ctxt->cpuid->extd.clzero)
 #define vcpu_has_wbnoinvd()    (ctxt->cpuid->extd.wbnoinvd)
+#define vcpu_has_nscb()        (ctxt->cpuid->extd.nscb)
 
 #define vcpu_has_bmi1()        (ctxt->cpuid->feat.bmi1)
 #define vcpu_has_hle()         (ctxt->cpuid->feat.hle)
@@ -2102,7 +2103,7 @@ protmode_load_seg(
         case x86_seg_tr:
             goto raise_exn;
         }
-        if ( !_amd_like(cp) || !ops->read_segment ||
+        if ( !_amd_like(cp) || vcpu_has_nscb() || !ops->read_segment ||
              ops->read_segment(seg, sreg, ctxt) != X86EMUL_OKAY )
             memset(sreg, 0, sizeof(*sreg));
         else
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.16


From xen-changelog-bounces@lists.xenproject.org Mon Oct 31 12:33:15 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.16] VMX: correct error handling in vmx_create_vmcs()
Message-Id: <E1opTyc-0001oD-CX@xenbits.xenproject.org>
Date: Mon, 31 Oct 2022 12:33:14 +0000

commit 02ab5e97c41d275ccea0910b1d8bce41ed1be5bf
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Mon Oct 31 13:20:40 2022 +0100
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 31 13:20:40 2022 +0100

    VMX: correct error handling in vmx_create_vmcs()
    
    With the addition of vmx_add_msr() calls to construct_vmcs() there are
    now cases where simply freeing the VMCS isn't enough: The MSR bitmap
    page as well as one of the MSR area ones (if it's the 2nd vmx_add_msr()
    which fails) may also need freeing. Switch to using vmx_destroy_vmcs()
    instead.
    
    Fixes: 3bd36952dab6 ("x86/spec-ctrl: Introduce an option to control L1D_FLUSH for HVM HAP guests")
    Fixes: 53a570b28569 ("x86/spec-ctrl: Support IBPB-on-entry")
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>
    master commit: 448d28309f1a966bdc850aff1a637e0b79a03e43
    master date: 2022-10-12 17:57:56 +0200
---
 xen/arch/x86/hvm/vmx/vmcs.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index dd817cee4e..237b13459d 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -1831,7 +1831,7 @@ int vmx_create_vmcs(struct vcpu *v)
 
     if ( (rc = construct_vmcs(v)) != 0 )
     {
-        vmx_free_vmcs(vmx->vmcs_pa);
+        vmx_destroy_vmcs(v);
         return rc;
     }
 
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.16


From xen-changelog-bounces@lists.xenproject.org Mon Oct 31 12:33:25 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.16] argo: Remove reachable ASSERT_UNREACHABLE
Message-Id: <E1opTym-0001p6-Ft@xenbits.xenproject.org>
Date: Mon, 31 Oct 2022 12:33:24 +0000

commit d4a11d6a22cf73ac7441750e5e8113779348885e
Author:     Jason Andryuk <jandryuk@gmail.com>
AuthorDate: Mon Oct 31 13:21:31 2022 +0100
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 31 13:21:31 2022 +0100

    argo: Remove reachable ASSERT_UNREACHABLE
    
    I observed this ASSERT_UNREACHABLE in partner_rings_remove consistently
    trip.  It was in OpenXT with the viptables patch applied.
    
    dom10 shuts down.
    dom7 is REJECTED sending to dom10.
    dom7 shuts down and this ASSERT trips for dom10.
    
    The argo_send_info has a domid, but there is no refcount taken on
    the domain.  Therefore it's not appropriate to ASSERT that the domain
    can be looked up via domid.  Replace with a debug message.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Reviewed-by: Christopher Clark <christopher.w.clark@gmail.com>
    master commit: 197f612b77c5afe04e60df2100a855370d720ad7
    master date: 2022-10-14 14:45:41 +0100
---
 xen/common/argo.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/common/argo.c b/xen/common/argo.c
index eaea7ba888..80f3275092 100644
--- a/xen/common/argo.c
+++ b/xen/common/argo.c
@@ -1298,7 +1298,8 @@ partner_rings_remove(struct domain *src_d)
                     ASSERT_UNREACHABLE();
             }
             else
-                ASSERT_UNREACHABLE();
+                argo_dprintk("%pd has entry for stale partner d%u\n",
+                             src_d, send_info->id.domain_id);
 
             if ( dst_d )
                 rcu_unlock_domain(dst_d);
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.16


From xen-changelog-bounces@lists.xenproject.org Mon Oct 31 12:33:35 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.16] EFI: don't convert memory marked for runtime use to ordinary RAM
Message-Id: <E1opTyw-0001pb-Iv@xenbits.xenproject.org>
Date: Mon, 31 Oct 2022 12:33:34 +0000

commit 54f8ed80c8308e65c3f57ae6cbd130f43f5ecbbd
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Mon Oct 31 13:22:17 2022 +0100
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 31 13:22:17 2022 +0100

    EFI: don't convert memory marked for runtime use to ordinary RAM
    
    efi_init_memory() in both relevant places treats EFI_MEMORY_RUNTIME as
    higher priority than the type of the range. To avoid accessing memory at
    runtime which was re-used for other purposes, make
    efi_arch_process_memory_map() follow suit. While in theory the same would
    apply to EfiACPIReclaimMemory, we don't actually "reclaim" or clobber
    that memory (converted to E820_ACPI on x86) there (and it would be a bug
    if the Dom0 kernel tried to reclaim the range, bypassing Xen's memory
    management, plus it would be at least bogus if it clobbered that space),
    hence that type's handling can be left alone.
    
    Fixes: bf6501a62e80 ("x86-64: EFI boot code")
    Fixes: facac0af87ef ("x86-64: EFI runtime code")
    Fixes: 6d70ea10d49f ("Add ARM EFI boot support")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    master commit: f324300c8347b6aa6f9c0b18e0a90bbf44011a9a
    master date: 2022-10-21 12:30:24 +0200
---
 xen/arch/arm/efi/efi-boot.h | 3 ++-
 xen/arch/x86/efi/efi-boot.h | 4 +++-
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/efi/efi-boot.h b/xen/arch/arm/efi/efi-boot.h
index 9f26798239..849071fe53 100644
--- a/xen/arch/arm/efi/efi-boot.h
+++ b/xen/arch/arm/efi/efi-boot.h
@@ -194,7 +194,8 @@ static EFI_STATUS __init efi_process_memory_map_bootinfo(EFI_MEMORY_DESCRIPTOR *
 
     for ( Index = 0; Index < (mmap_size / desc_size); Index++ )
     {
-        if ( desc_ptr->Attribute & EFI_MEMORY_WB &&
+        if ( !(desc_ptr->Attribute & EFI_MEMORY_RUNTIME) &&
+             (desc_ptr->Attribute & EFI_MEMORY_WB) &&
              (desc_ptr->Type == EfiConventionalMemory ||
               desc_ptr->Type == EfiLoaderCode ||
               desc_ptr->Type == EfiLoaderData ||
diff --git a/xen/arch/x86/efi/efi-boot.h b/xen/arch/x86/efi/efi-boot.h
index 4ee77fb9bf..d996016223 100644
--- a/xen/arch/x86/efi/efi-boot.h
+++ b/xen/arch/x86/efi/efi-boot.h
@@ -185,7 +185,9 @@ static void __init efi_arch_process_memory_map(EFI_SYSTEM_TABLE *SystemTable,
             /* fall through */
         case EfiLoaderCode:
         case EfiLoaderData:
-            if ( desc->Attribute & EFI_MEMORY_WB )
+            if ( desc->Attribute & EFI_MEMORY_RUNTIME )
+                type = E820_RESERVED;
+            else if ( desc->Attribute & EFI_MEMORY_WB )
                 type = E820_RAM;
             else
         case EfiUnusableMemory:
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.16


From xen-changelog-bounces@lists.xenproject.org Mon Oct 31 12:33:45 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.16] xen/sched: fix race in RTDS scheduler
Message-Id: <E1opTz6-0001q4-Ls@xenbits.xenproject.org>
Date: Mon, 31 Oct 2022 12:33:44 +0000

commit 481465f35da1bcec0b2a4dfd6fc51d86cac28547
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Mon Oct 31 13:22:54 2022 +0100
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 31 13:22:54 2022 +0100

    xen/sched: fix race in RTDS scheduler
    
    When a domain gets paused the unit runnable state can change to "not
    runnable" without the scheduling lock being involved. This means that
    a specific scheduler isn't involved in this change of runnable state.
    
    In the RTDS scheduler this can result in an inconsistency in case a
    unit is losing its "runnable" capability while the RTDS scheduler's
    scheduling function is active. RTDS will remove the unit from the run
    queue, but doesn't do so for the replenish queue, leading to hitting
    an ASSERT() in replq_insert() later when the domain is unpaused again.
    
    Fix that by removing the unit from the replenish queue as well in this
    case.
    
    Fixes: 7c7b407e7772 ("xen/sched: introduce unit_runnable_state()")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Dario Faggioli <dfaggioli@suse.com>
    master commit: 73c62927f64ecb48f27d06176befdf76b879f340
    master date: 2022-10-21 12:32:23 +0200
---
 xen/common/sched/rt.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/common/sched/rt.c b/xen/common/sched/rt.c
index c24cd2ac32..ec2ca1bebc 100644
--- a/xen/common/sched/rt.c
+++ b/xen/common/sched/rt.c
@@ -1087,6 +1087,7 @@ rt_schedule(const struct scheduler *ops, struct sched_unit *currunit,
         else if ( !unit_runnable_state(snext->unit) )
         {
             q_remove(snext);
+            replq_remove(ops, snext);
             snext = rt_unit(sched_idle_unit(sched_cpu));
         }
 
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.16


From xen-changelog-bounces@lists.xenproject.org Mon Oct 31 12:33:55 2022
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.16] xen/sched: fix restore_vcpu_affinity() by removing it
Message-Id: <E1opTzG-0001sU-PB@xenbits.xenproject.org>
Date: Mon, 31 Oct 2022 12:33:54 +0000

commit 88f2bf5de9ad789e1c61b5d5ecf118909eed6917
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Mon Oct 31 13:23:50 2022 +0100
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 31 13:23:50 2022 +0100

    xen/sched: fix restore_vcpu_affinity() by removing it
    
    When the system is coming up after having been suspended,
    restore_vcpu_affinity() is called for each domain in order to adjust
    the vcpu's affinity settings in case a cpu didn't come back to life again.
    
    The way restore_vcpu_affinity() is doing that is wrong, because the
    specific scheduler isn't being informed about a possible migration of
    the vcpu to another cpu. Additionally the migration often happens even
    if all cpus are running again, as it is done without checking
    whether it is really needed.
    
    As cpupool management is already calling cpu_disable_scheduler() for
    cpus not having come up again, and cpu_disable_scheduler() is taking
    care of any needed vcpu migration in the proper way, there is
    simply no need for restore_vcpu_affinity().
    
    So just remove restore_vcpu_affinity() completely, together with the
    no longer used sched_reset_affinity_broken().
    
    Fixes: 8a04eaa8ea83 ("xen/sched: move some per-vcpu items to struct sched_unit")
    Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Dario Faggioli <dfaggioli@suse.com>
    Tested-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    master commit: fce1f381f7388daaa3e96dbb0d67d7a3e4bb2d2d
    master date: 2022-10-24 11:16:27 +0100
---
 xen/arch/x86/acpi/power.c |  3 --
 xen/common/sched/core.c   | 78 -----------------------------------------------
 xen/include/xen/sched.h   |  1 -
 3 files changed, 82 deletions(-)

diff --git a/xen/arch/x86/acpi/power.c b/xen/arch/x86/acpi/power.c
index dd397f7130..1a7baeebe6 100644
--- a/xen/arch/x86/acpi/power.c
+++ b/xen/arch/x86/acpi/power.c
@@ -159,10 +159,7 @@ static void thaw_domains(void)
 
     rcu_read_lock(&domlist_read_lock);
     for_each_domain ( d )
-    {
-        restore_vcpu_affinity(d);
         domain_unpause(d);
-    }
     rcu_read_unlock(&domlist_read_lock);
 }
 
diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 900aab8f66..9173cf690c 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -1188,84 +1188,6 @@ static bool sched_check_affinity_broken(const struct sched_unit *unit)
     return false;
 }
 
-static void sched_reset_affinity_broken(const struct sched_unit *unit)
-{
-    struct vcpu *v;
-
-    for_each_sched_unit_vcpu ( unit, v )
-        v->affinity_broken = false;
-}
-
-void restore_vcpu_affinity(struct domain *d)
-{
-    unsigned int cpu = smp_processor_id();
-    struct sched_unit *unit;
-
-    ASSERT(system_state == SYS_STATE_resume);
-
-    rcu_read_lock(&sched_res_rculock);
-
-    for_each_sched_unit ( d, unit )
-    {
-        spinlock_t *lock;
-        unsigned int old_cpu = sched_unit_master(unit);
-        struct sched_resource *res;
-
-        ASSERT(!unit_runnable(unit));
-
-        /*
-         * Re-assign the initial processor as after resume we have no
-         * guarantee the old processor has come back to life again.
-         *
-         * Therefore, here, before actually unpausing the domains, we should
-         * set v->processor of each of their vCPUs to something that will
-         * make sense for the scheduler of the cpupool in which they are in.
-         */
-        lock = unit_schedule_lock_irq(unit);
-
-        cpumask_and(cpumask_scratch_cpu(cpu), unit->cpu_hard_affinity,
-                    cpupool_domain_master_cpumask(d));
-        if ( cpumask_empty(cpumask_scratch_cpu(cpu)) )
-        {
-            if ( sched_check_affinity_broken(unit) )
-            {
-                sched_set_affinity(unit, unit->cpu_hard_affinity_saved, NULL);
-                sched_reset_affinity_broken(unit);
-                cpumask_and(cpumask_scratch_cpu(cpu), unit->cpu_hard_affinity,
-                            cpupool_domain_master_cpumask(d));
-            }
-
-            if ( cpumask_empty(cpumask_scratch_cpu(cpu)) )
-            {
-                /* Affinity settings of one vcpu are for the complete unit. */
-                printk(XENLOG_DEBUG "Breaking affinity for %pv\n",
-                       unit->vcpu_list);
-                sched_set_affinity(unit, &cpumask_all, NULL);
-                cpumask_and(cpumask_scratch_cpu(cpu), unit->cpu_hard_affinity,
-                            cpupool_domain_master_cpumask(d));
-            }
-        }
-
-        res = get_sched_res(cpumask_any(cpumask_scratch_cpu(cpu)));
-        sched_set_res(unit, res);
-
-        spin_unlock_irq(lock);
-
-        /* v->processor might have changed, so reacquire the lock. */
-        lock = unit_schedule_lock_irq(unit);
-        res = sched_pick_resource(unit_scheduler(unit), unit);
-        sched_set_res(unit, res);
-        spin_unlock_irq(lock);
-
-        if ( old_cpu != sched_unit_master(unit) )
-            sched_move_irqs(unit);
-    }
-
-    rcu_read_unlock(&sched_res_rculock);
-
-    domain_update_node_affinity(d);
-}
-
 /*
  * This function is used by cpu_hotplug code via cpu notifier chain
  * and from cpupools to switch schedulers on a cpu.
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 3f4225738a..1a1fab5239 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -999,7 +999,6 @@ void vcpu_set_periodic_timer(struct vcpu *v, s_time_t value);
 void sched_setup_dom0_vcpus(struct domain *d);
 int vcpu_temporary_affinity(struct vcpu *v, unsigned int cpu, uint8_t reason);
 int vcpu_set_hard_affinity(struct vcpu *v, const cpumask_t *affinity);
-void restore_vcpu_affinity(struct domain *d);
 int vcpu_affinity_domctl(struct domain *d, uint32_t cmd,
                          struct xen_domctl_vcpuaffinity *vcpuaff);
 
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.16


From xen-changelog-bounces@lists.xenproject.org Mon Oct 31 12:34:05 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Oct 2022 12:34:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.432825.685463 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opTzR-0001Kb-Px; Mon, 31 Oct 2022 12:34:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 432825.685463; Mon, 31 Oct 2022 12:34:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opTzR-0001KU-Mu; Mon, 31 Oct 2022 12:34:05 +0000
Received: by outflank-mailman (input) for mailman id 432825;
 Mon, 31 Oct 2022 12:34:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opTzQ-0001KL-Tu
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:34:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opTzQ-0000xt-TF
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:34:04 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opTzQ-0001tE-SP
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:34:04 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=GQN1IftdYqE5dklxDt8vAciEHEQMKhwSAnsX03tyKjg=; b=2uUM4zqegiveonAijARq/ZgVEO
	ensHTl5n4rla9pt6+Z6MEJdLm53mEbsa7iguDjHm/KpElt7iO5Kzjg70Wm4d+iesoJQzDzfZHMySS
	2n7b/LQNEPozUfT/FHNauurJ3XEXEN/57Xf967PRWoEwMCISt2qHlvWse6xHmctfcAl0=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.16] x86/shadow: drop (replace) bogus assertions
Message-Id: <E1opTzQ-0001tE-SP@xenbits.xenproject.org>
Date: Mon, 31 Oct 2022 12:34:04 +0000

commit 9fdb4f17656f74b35af0882b558e44832ff00b5f
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Mon Oct 31 13:24:33 2022 +0100
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 31 13:24:33 2022 +0100

    x86/shadow: drop (replace) bogus assertions
    
    The addition of a call to shadow_blow_tables() from shadow_teardown()
    has resulted in the "no vcpus" related assertion becoming triggerable:
    If domain_create() fails with at least one page successfully allocated
    in the course of shadow_enable(), or if domain_create() succeeds and
    the domain is then killed without ever invoking XEN_DOMCTL_max_vcpus.
    Note that in-tree tests (test-resource and test-tsx) do exactly the
    latter of these two.
    
    The assertion's comment was bogus anyway: Shadow mode has been getting
    enabled before allocation of vCPU-s for quite some time. Convert the
    assertion to a conditional: As long as there are no vCPU-s, there's
    nothing to blow away.
    
    Fixes: e7aa55c0aab3 ("x86/p2m: free the paging memory pool preemptively")
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    
    A similar assertion/comment pair exists in _shadow_prealloc(); the
    comment is similarly bogus, and the assertion could in principle trigger
    e.g. when shadow_alloc_p2m_page() is called early enough. Replace those
    at the same time by a similar early return, here indicating failure to
    the caller (which will generally lead to the domain being crashed in
    shadow_prealloc()).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: a92dc2bb30ba65ae25d2f417677eb7ef9a6a0fef
    master date: 2022-10-24 15:46:11 +0200
---
 xen/arch/x86/mm/shadow/common.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 3b0d781991..1de0139742 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -943,8 +943,9 @@ static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
         /* No reclaim when the domain is dying, teardown will take care of it. */
         return false;
 
-    /* Shouldn't have enabled shadows if we've no vcpus. */
-    ASSERT(d->vcpu && d->vcpu[0]);
+    /* Nothing to reclaim when there are no vcpus yet. */
+    if ( !d->vcpu[0] )
+        return false;
 
     /* Stage one: walk the list of pinned pages, unpinning them */
     perfc_incr(shadow_prealloc_1);
@@ -1034,8 +1035,9 @@ void shadow_blow_tables(struct domain *d)
     mfn_t smfn;
     int i;
 
-    /* Shouldn't have enabled shadows if we've no vcpus. */
-    ASSERT(d->vcpu && d->vcpu[0]);
+    /* Nothing to do when there are no vcpus yet. */
+    if ( !d->vcpu[0] )
+        return;
 
     /* Pass one: unpin all pinned pages */
     foreach_pinned_shadow(d, sp, t)
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.16
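The commit above converts a debug-build assertion into an early return: when teardown runs before any vCPU was ever allocated, there is simply nothing to reclaim. As a general illustration of that pattern (hypothetical stand-in structures, not Xen code), the shape of the fix is:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical miniatures of the structures involved. */
struct vcpu { int id; };
struct domain { struct vcpu *vcpu[2]; };

/*
 * Before the fix the function asserted d->vcpu[0] existed, which can
 * fire when teardown runs before any vCPU was allocated (e.g. a failed
 * domain_create(), or a domain killed before XEN_DOMCTL_max_vcpus).
 * After the fix: with no vCPUs there is nothing to reclaim, so report
 * failure to the caller instead of crashing a debug build.
 */
static bool shadow_prealloc_sketch(struct domain *d)
{
    if ( !d->vcpu[0] )
        return false;          /* nothing to reclaim yet */

    /* ... would walk the list of pinned shadow pages here ... */
    return true;
}
```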


From xen-changelog-bounces@lists.xenproject.org Mon Oct 31 12:34:15 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Oct 2022 12:34:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.432826.685466 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opTzb-0001OM-Sc; Mon, 31 Oct 2022 12:34:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 432826.685466; Mon, 31 Oct 2022 12:34:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opTzb-0001OF-Q0; Mon, 31 Oct 2022 12:34:15 +0000
Received: by outflank-mailman (input) for mailman id 432826;
 Mon, 31 Oct 2022 12:34:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opTzb-0001Ny-0i
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:34:15 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opTzb-0000y3-02
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:34:15 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opTza-0001uQ-Va
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:34:14 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=AxvIDpd8e7KUfi2FOLFsVV4jZS2Eqwb9KVg0uvAdKTU=; b=5paZm3u2gXZW10b9IYkIl/EKb7
	V3NA4FYHgn04hOwIjujHjP5tZWJ5lDgCFreRCbUDE0VpxL0T5ZRFi17PO8J2i9vkgYoNVtbDWFp+j
	MJRduY3obsxgCK+qqhzcosAJosl/DfESU8YGwjZk1WjAR5cjRjXq62kHi6Bw1lh7PeFg=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.16] vpci: don't assume that vpci per-device data exists unconditionally
Message-Id: <E1opTza-0001uQ-Va@xenbits.xenproject.org>
Date: Mon, 31 Oct 2022 12:34:14 +0000

commit 96d26f11f56e83b98ec184f4e0d17161efe3a927
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Mon Oct 31 13:25:13 2022 +0100
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 31 13:25:13 2022 +0100

    vpci: don't assume that vpci per-device data exists unconditionally
    
    It's possible for a device to be assigned to a domain but have no
    vpci structure if vpci_process_pending() failed and called
    vpci_remove_device() as a result.  The unconditional accesses done by
    vpci_{read,write}() and vpci_remove_device() to pdev->vpci would
    then trigger a NULL pointer dereference.
    
    Add checks for pdev->vpci presence in the affected functions.
    
    Fixes: 9c244fdef7 ('vpci: add header handlers')
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: 6ccb5e308ceeb895fbccd87a528a8bd24325aa39
    master date: 2022-10-26 14:55:30 +0200
---
 xen/drivers/vpci/vpci.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
index dfc8136ffb..53d78d5391 100644
--- a/xen/drivers/vpci/vpci.c
+++ b/xen/drivers/vpci/vpci.c
@@ -37,7 +37,7 @@ extern vpci_register_init_t *const __end_vpci_array[];
 
 void vpci_remove_device(struct pci_dev *pdev)
 {
-    if ( !has_vpci(pdev->domain) )
+    if ( !has_vpci(pdev->domain) || !pdev->vpci )
         return;
 
     spin_lock(&pdev->vpci->lock);
@@ -326,7 +326,7 @@ uint32_t vpci_read(pci_sbdf_t sbdf, unsigned int reg, unsigned int size)
 
     /* Find the PCI dev matching the address. */
     pdev = pci_get_pdev_by_domain(d, sbdf.seg, sbdf.bus, sbdf.devfn);
-    if ( !pdev )
+    if ( !pdev || !pdev->vpci )
         return vpci_read_hw(sbdf, reg, size);
 
     spin_lock(&pdev->vpci->lock);
@@ -436,7 +436,7 @@ void vpci_write(pci_sbdf_t sbdf, unsigned int reg, unsigned int size,
      * Passthrough everything that's not trapped.
      */
     pdev = pci_get_pdev_by_domain(d, sbdf.seg, sbdf.bus, sbdf.devfn);
-    if ( !pdev )
+    if ( !pdev || !pdev->vpci )
     {
         vpci_write_hw(sbdf, reg, size, data);
         return;
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.16
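The guard added above is a classic defensive-check pattern: a device may still be assigned to the domain while its per-device vpci state has already been torn down, so every consumer must check the pointer before dereferencing (here, before taking `pdev->vpci->lock`). A minimal sketch of the fallback behaviour, with hypothetical stand-in types rather than the real Xen interfaces:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical miniatures: a device may exist while its per-device
 * vpci state has already been freed by an earlier error path. */
struct vpci { int dummy; };
struct pci_dev { struct vpci *vpci; };

/* Stand-in for a direct hardware config-space read. */
static uint32_t read_hw(void) { return 0xffffffffu; }

/* Mirror of the "!pdev || !pdev->vpci" guard: fall back to direct
 * hardware access when either the device or its vpci state is absent,
 * instead of dereferencing a NULL pointer. */
static uint32_t vpci_read_sketch(const struct pci_dev *pdev)
{
    if ( !pdev || !pdev->vpci )
        return read_hw();

    return 0; /* would consult trapped-register handlers here */
}
```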


From xen-changelog-bounces@lists.xenproject.org Mon Oct 31 12:34:26 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Oct 2022 12:34:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.432827.685471 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opTzm-0001RH-UN; Mon, 31 Oct 2022 12:34:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 432827.685471; Mon, 31 Oct 2022 12:34:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opTzm-0001R7-Rc; Mon, 31 Oct 2022 12:34:26 +0000
Received: by outflank-mailman (input) for mailman id 432827;
 Mon, 31 Oct 2022 12:34:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opTzl-0001Qw-3w
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:34:25 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opTzl-0000yU-3E
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:34:25 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opTzl-0001vG-2L
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:34:25 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=rbwPCoYivEbT/IvF2w3wmZ6WPPqPFtcBbtJMOKolk3c=; b=ot321VaQsVUvejwPzbA90emEAt
	5n9jeL8USvTqa7fRfCcgfjaI2mPHNLspmcPM/RkKvnrO0reL93eA1H415DuXa2tJi/WSAkOLby72Y
	sWlui+AFSkJORdt2jaYhbsVWuhjgV2kqucBKlfVbZDXY5nKMggVaRoXoCgjEHNBGYdSs=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.16] vpci/msix: remove from table list on detach
Message-Id: <E1opTzl-0001vG-2L@xenbits.xenproject.org>
Date: Mon, 31 Oct 2022 12:34:25 +0000

commit 8f3f8f20de5cea704671d4ca83f2dceb93ab98d8
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Mon Oct 31 13:25:40 2022 +0100
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 31 13:25:40 2022 +0100

    vpci/msix: remove from table list on detach
    
    Teardown of MSIX vPCI related data doesn't currently remove the MSIX
    device data from the list of MSIX tables handled by the domain,
    leading to a use-after-free of the data in the msix structure.
    
    Remove the structure from the list before freeing it in order to
    resolve this.
    
    Reported-by: Jan Beulich <jbeulich@suse.com>
    Fixes: d6281be9d0 ('vpci/msix: add MSI-X handlers')
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: c14aea137eab29eb9c30bfad745a00c65ad21066
    master date: 2022-10-26 14:56:58 +0200
---
 xen/drivers/vpci/vpci.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
index 53d78d5391..b9339f8f3e 100644
--- a/xen/drivers/vpci/vpci.c
+++ b/xen/drivers/vpci/vpci.c
@@ -51,8 +51,12 @@ void vpci_remove_device(struct pci_dev *pdev)
         xfree(r);
     }
     spin_unlock(&pdev->vpci->lock);
-    if ( pdev->vpci->msix && pdev->vpci->msix->pba )
-        iounmap(pdev->vpci->msix->pba);
+    if ( pdev->vpci->msix )
+    {
+        list_del(&pdev->vpci->msix->next);
+        if ( pdev->vpci->msix->pba )
+            iounmap(pdev->vpci->msix->pba);
+    }
     xfree(pdev->vpci->msix);
     xfree(pdev->vpci->msi);
     xfree(pdev->vpci);
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.16
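The use-after-free fixed above follows from a simple invariant: an object linked into a global list must be unlinked *before* it is freed, or the list retains a dangling pointer into freed memory. A self-contained sketch with a minimal doubly linked list (standing in for Xen's `struct list_head`, not the actual Xen implementation):

```c
#include <assert.h>
#include <stdlib.h>

/* Minimal doubly linked list, standing in for Xen's struct list_head. */
struct list_head { struct list_head *prev, *next; };

static void list_init(struct list_head *h) { h->prev = h->next = h; }

static void list_add(struct list_head *n, struct list_head *h)
{
    n->next = h->next;
    n->prev = h;
    h->next->prev = n;
    h->next = n;
}

static void list_del(struct list_head *n)
{
    n->prev->next = n->next;
    n->next->prev = n->prev;
    n->prev = n->next = NULL;
}

static int list_empty(const struct list_head *h) { return h->next == h; }

/* Hypothetical miniature of the MSI-X per-device state. */
struct msix { struct list_head next; };

/* The fix: unlink the entry from the domain-wide list *before* freeing
 * it, so no reachable pointer into freed memory remains. */
static void teardown(struct msix *m)
{
    list_del(&m->next);
    free(m);
}
```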


From xen-changelog-bounces@lists.xenproject.org Mon Oct 31 12:34:37 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Oct 2022 12:34:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.432828.685475 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opTzw-0001UL-Vw; Mon, 31 Oct 2022 12:34:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 432828.685475; Mon, 31 Oct 2022 12:34:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opTzw-0001UD-TB; Mon, 31 Oct 2022 12:34:36 +0000
Received: by outflank-mailman (input) for mailman id 432828;
 Mon, 31 Oct 2022 12:34:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opTzv-0001Ty-6v
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:34:35 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opTzv-0000yY-6H
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:34:35 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opTzv-0001w1-5S
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:34:35 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=XRRpG4h7kGjxuBCuJxagFE7NADWMZ2OqjMCfoy0QdyU=; b=6elJjwEUpMCba5ksnAqmgT+f8b
	4iP/PptEhBkF3CexUjOm/CPdXLWyZDNqX/NW+esFfgG7cz8f66JG32ynyl1M2XQlxilx1QTgzbHuH
	JGujhEKYWKl1ZVdR5J9FPPZW2EArpKOEXUYVZSiXImO5A2RKpoAHnv8eaBqjr0S6OouQ=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.16] x86: also zap secondary time area handles during soft reset
Message-Id: <E1opTzv-0001w1-5S@xenbits.xenproject.org>
Date: Mon, 31 Oct 2022 12:34:35 +0000

commit aac108509055e5f5ff293e1fb44614f96a0996c6
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Mon Oct 31 13:26:08 2022 +0100
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 31 13:26:08 2022 +0100

    x86: also zap secondary time area handles during soft reset
    
    Just like domain_soft_reset() properly zaps runstate area handles, the
    secondary time area ones also need discarding to prevent guest memory
    corruption once the guest is re-started.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    master commit: b80d4f8d2ea6418e32fb4f20d1304ace6d6566e3
    master date: 2022-10-27 11:49:09 +0200
---
 xen/arch/x86/domain.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index a4356893bd..3fab2364be 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -929,6 +929,7 @@ int arch_domain_soft_reset(struct domain *d)
     struct page_info *page = virt_to_page(d->shared_info), *new_page;
     int ret = 0;
     struct domain *owner;
+    struct vcpu *v;
     mfn_t mfn;
     gfn_t gfn;
     p2m_type_t p2mt;
@@ -1008,7 +1009,12 @@ int arch_domain_soft_reset(struct domain *d)
                "Failed to add a page to replace %pd's shared_info frame %"PRI_gfn"\n",
                d, gfn_x(gfn));
         free_domheap_page(new_page);
+        goto exit_put_gfn;
     }
+
+    for_each_vcpu ( d, v )
+        set_xen_guest_handle(v->arch.time_info_guest, NULL);
+
  exit_put_gfn:
     put_gfn(d, gfn_x(gfn));
  exit_put_page:
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.16
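The fix above extends soft reset to discard per-vCPU guest handles: any guest-physical address registered before the reset must be forgotten, or the hypervisor would keep writing time data into the rebooted guest's memory at a now-meaningless address. A hedged sketch of the zapping loop, with hypothetical stand-in types in place of Xen's `for_each_vcpu` and `set_xen_guest_handle`:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins: each vCPU keeps a guest-registered handle
 * that must be discarded on soft reset so the rebooted guest's memory
 * is not corrupted at the stale address. */
struct vcpu {
    void *time_info_guest;   /* stand-in for the guest handle */
    struct vcpu *next;
};

/* Equivalent of the for_each_vcpu loop calling
 * set_xen_guest_handle(v->arch.time_info_guest, NULL). */
static void soft_reset_zap_handles(struct vcpu *list)
{
    for ( struct vcpu *v = list; v; v = v->next )
        v->time_info_guest = NULL;
}
```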


From xen-changelog-bounces@lists.xenproject.org Mon Oct 31 12:34:47 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Oct 2022 12:34:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.432829.685478 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opU07-0001Ww-0r; Mon, 31 Oct 2022 12:34:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 432829.685478; Mon, 31 Oct 2022 12:34:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opU06-0001Wp-Ue; Mon, 31 Oct 2022 12:34:46 +0000
Received: by outflank-mailman (input) for mailman id 432829;
 Mon, 31 Oct 2022 12:34:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opU05-0001WQ-9m
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:34:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opU05-0000yc-96
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:34:45 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opU05-0001wq-8R
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:34:45 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=MtxGdGTVqHi8V9XJbA17u4Ib2sZBWSTgdY9VDU++g1c=; b=Dk8f4DgS4fseeJHVcX+XzhdDPq
	FVKGhM/ms5EHE/ovXwscgbUDlHhXGelX+zvPYF29RpakXvVV65j0CjLW+sVmh5cKt+y7nd4cHSM3W
	WPhgkDNCFWY/Dme4xL+Nx79/nG786BGgRVo1qz7dSfMuSMEcdEKCKJ9JOOnXDXRAd6vo=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.16] common: map_vcpu_info() wants to unshare the underlying page
Message-Id: <E1opU05-0001wq-8R@xenbits.xenproject.org>
Date: Mon, 31 Oct 2022 12:34:45 +0000

commit 426a8346c01075ec5eba4aadefab03a96b6ece6a
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Mon Oct 31 13:26:33 2022 +0100
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 31 13:26:33 2022 +0100

    common: map_vcpu_info() wants to unshare the underlying page
    
    Not passing P2M_UNSHARE to get_page_from_gfn() means there won't even be
    an attempt to unshare the referenced page, without any indication to the
    caller (e.g. -EAGAIN). Note that guests have no direct control over
    which of their pages are shared (or paged out), and hence they have no
    way to make sure all on their own that the subsequent obtaining of a
    writable type reference can actually succeed.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
    master commit: 48980cf24d5cf41fd644600f99c753419505e735
    master date: 2022-10-28 11:38:32 +0200
---
 xen/common/domain.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index 56d47dd664..e3afcacb6c 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -1471,7 +1471,7 @@ int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset)
     if ( (v != current) && !(v->pause_flags & VPF_down) )
         return -EINVAL;
 
-    page = get_page_from_gfn(d, gfn, NULL, P2M_ALLOC);
+    page = get_page_from_gfn(d, gfn, NULL, P2M_UNSHARE);
     if ( !page )
         return -EINVAL;
 
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.16


From xen-changelog-bounces@lists.xenproject.org Mon Oct 31 12:34:57 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Oct 2022 12:34:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.432830.685483 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opU0H-0001Zp-2a; Mon, 31 Oct 2022 12:34:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 432830.685483; Mon, 31 Oct 2022 12:34:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opU0G-0001Zh-WA; Mon, 31 Oct 2022 12:34:56 +0000
Received: by outflank-mailman (input) for mailman id 432830;
 Mon, 31 Oct 2022 12:34:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opU0F-0001ZH-Cx
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:34:55 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opU0F-0000yg-CN
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:34:55 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opU0F-0001xh-BV
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:34:55 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=wnKE4x/PPNwje1VqRtcNk4QNZyHj+9jy38COJvZjGKo=; b=p+jNeKfabR7AmUdumpyRJt1/vg
	EqMLrFspZVZEGl/eJfR5RLQdhyC9nHlnZ1MmkIKJ1kfqqLs0iKSPzRjm2tW6s9dXTWs2W/o2SqXmY
	flEj33JdR3R/xQyvGDo4woRxyyuKbGNQSt5O/5fSSdoR7hb19ld+9gMd8MtPylh5uSkc=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.16] x86/pv-shim: correctly ignore empty onlining requests
Message-Id: <E1opU0F-0001xh-BV@xenbits.xenproject.org>
Date: Mon, 31 Oct 2022 12:34:55 +0000

commit 08f6c88405a4406cac5b90e8d9873258dc445006
Author:     Igor Druzhinin <igor.druzhinin@citrix.com>
AuthorDate: Mon Oct 31 13:26:59 2022 +0100
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 31 13:26:59 2022 +0100

    x86/pv-shim: correctly ignore empty onlining requests
    
    Mem-op requests may have zero extents. Such requests need treating as
    no-ops. pv_shim_online_memory(), however, would have tried to take 2³²-1
    order-sized pages from its balloon list (to then populate them),
    typically ending when the entire set of ballooned pages of this order
    was consumed.
    
    Note that pv_shim_offline_memory() does not have such an issue.
    
    Fixes: b2245acc60c3 ("xen/pvshim: memory hotplug")
    Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 9272225ca72801fd9fa5b268a2d1c5adebd19cd9
    master date: 2022-10-28 15:47:59 +0200
---
 xen/arch/x86/pv/shim.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/xen/arch/x86/pv/shim.c b/xen/arch/x86/pv/shim.c
index d9704121a7..4146ee3f9c 100644
--- a/xen/arch/x86/pv/shim.c
+++ b/xen/arch/x86/pv/shim.c
@@ -944,6 +944,9 @@ void pv_shim_online_memory(unsigned int nr, unsigned int order)
     struct page_info *page, *tmp;
     PAGE_LIST_HEAD(list);
 
+    if ( !nr )
+        return;
+
     spin_lock(&balloon_lock);
     page_list_for_each_safe ( page, tmp, &balloon )
     {
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.16
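The commit above guards against a zero-extent request: without the early return, unsigned arithmetic on a zero count can wrap to 2³²−1 and the function would try to drain essentially the whole balloon list. A minimal sketch of the guard only (hypothetical helper, not the actual shim logic):

```c
#include <assert.h>

/* Sketch of the fixed shape: treat an empty request as a no-op before
 * any counting logic runs, so an unsigned count of 0 can never be
 * interpreted as "take 2^32 - 1 pages". */
static unsigned int take_pages(unsigned int nr, unsigned int avail)
{
    unsigned int taken = 0;

    if ( !nr )           /* the fix: empty request is a no-op */
        return 0;

    /* Take up to nr pages from the available pool. */
    while ( taken < nr && taken < avail )
        taken++;

    return taken;
}
```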


From xen-changelog-bounces@lists.xenproject.org Mon Oct 31 12:35:07 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Oct 2022 12:35:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.432831.685486 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opU0R-0001mI-41; Mon, 31 Oct 2022 12:35:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 432831.685486; Mon, 31 Oct 2022 12:35:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opU0R-0001mB-1K; Mon, 31 Oct 2022 12:35:07 +0000
Received: by outflank-mailman (input) for mailman id 432831;
 Mon, 31 Oct 2022 12:35:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opU0P-0001g0-GS
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:35:05 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opU0P-0000z4-FL
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:35:05 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opU0P-0001zB-EY
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:35:05 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=bzjy+CtKSjcd+gw6Mk3s0EHNbyL3DK2AnT2YVUH5+wc=; b=fyj18Sltf6RjucxoKjD+9edwkO
	BHHIwuF0PyKbUs19G9nZxy6ucnABnFjxlwkMRelYtyjNOUKq8rfLeDMPQdn0lmL5Gwd4OP60Xn8Dd
	7hFOFn2lpwgm8DlPgwBHRoVPIycuYwzLgilcGNO8sLSuQA5OyirUGimT+EMOwuMNkOYw=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.16] x86/pv-shim: correct ballooning up for compat guests
Message-Id: <E1opU0P-0001zB-EY@xenbits.xenproject.org>
Date: Mon, 31 Oct 2022 12:35:05 +0000

commit 2f75e3654f00a62bd1f446a7424ccd56750a2e15
Author:     Igor Druzhinin <igor.druzhinin@citrix.com>
AuthorDate: Mon Oct 31 13:28:15 2022 +0100
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 31 13:28:15 2022 +0100

    x86/pv-shim: correct ballooning up for compat guests
    
    The compat layer for multi-extent memory ops may need to split incoming
    requests. Since the guest handles in the interface structures may not be
    altered, it does so by leveraging do_memory_op()'s continuation
    handling: It hands on non-initial requests with a non-zero start extent,
    with the (native) handle suitably adjusted down. As a result
    do_memory_op() sees only the first of potentially several requests with
    start extent being zero. It's only that case when the function would
    issue a call to pv_shim_online_memory(), yet the range then covers only
    the first sub-range that results from the split.
    
    Address that breakage by making a complementary call to
    pv_shim_online_memory() in compat layer.
    
    Fixes: b2245acc60c3 ("xen/pvshim: memory hotplug")
    Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: a0bfdd201ea12aa5679bb8944d63a4e0d3c23160
    master date: 2022-10-28 15:48:50 +0200
---
 xen/common/compat/memory.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/xen/common/compat/memory.c b/xen/common/compat/memory.c
index c43fa97cf1..a0e0562a40 100644
--- a/xen/common/compat/memory.c
+++ b/xen/common/compat/memory.c
@@ -7,6 +7,7 @@ EMIT_FILE;
 #include <xen/event.h>
 #include <xen/mem_access.h>
 #include <asm/current.h>
+#include <asm/guest.h>
 #include <compat/memory.h>
 
 #define xen_domid_t domid_t
@@ -146,7 +147,10 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
                 nat.rsrv->nr_extents = end_extent;
                 ++split;
             }
-
+            /* Avoid calling pv_shim_online_memory() when in a continuation. */
+            if ( pv_shim && op != XENMEM_decrease_reservation && !start_extent )
+                pv_shim_online_memory(cmp.rsrv.nr_extents - nat.rsrv->nr_extents,
+                                      cmp.rsrv.extent_order);
             break;
 
         case XENMEM_exchange:
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.16


From xen-changelog-bounces@lists.xenproject.org Mon Oct 31 12:35:17 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Oct 2022 12:35:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.432832.685490 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opU0b-0001pE-5b; Mon, 31 Oct 2022 12:35:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 432832.685490; Mon, 31 Oct 2022 12:35:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opU0b-0001p6-2q; Mon, 31 Oct 2022 12:35:17 +0000
Received: by outflank-mailman (input) for mailman id 432832;
 Mon, 31 Oct 2022 12:35:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opU0Z-0001om-Iq
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:35:15 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opU0Z-0000zF-IH
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:35:15 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opU0Z-000209-Hb
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:35:15 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=TIyjMt3L8qo9iEmwR9kiWmqW7ehG6XY/qmXhcyk99Wo=; b=PMgQ4c24O7A8KeJa/uvKHTMFoh
	WAMGwTNRShw/YIu966o3/teTCLEIlE5evLZWHVcRIVLr/ob8rOCxsBM58JKoM2OxamtsAsUZHz59o
	5pQw9bFMyADzViF3cdgqn3/WSg/XteAJma2vj2eE3LpyLWrFZCaUD81doKj4fvSkDT/A=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.16] x86/pv-shim: correct ballooning down for compat guests
Message-Id: <E1opU0Z-000209-Hb@xenbits.xenproject.org>
Date: Mon, 31 Oct 2022 12:35:15 +0000

commit c229b16ba3eb5579a9a5d470ab16dd9ad55e57d6
Author:     Igor Druzhinin <igor.druzhinin@citrix.com>
AuthorDate: Mon Oct 31 13:28:46 2022 +0100
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 31 13:28:46 2022 +0100

    x86/pv-shim: correct ballooning down for compat guests
    
    The compat layer for multi-extent memory ops may need to split incoming
    requests. Since the guest handles in the interface structures may not be
    altered, it does so by leveraging do_memory_op()'s continuation
    handling: It hands on non-initial requests with a non-zero start extent,
    with the (native) handle suitably adjusted down. As a result
    do_memory_op() sees only the first of potentially several requests with
    start extent being zero. In order to be usable as overall result, the
    function accumulates args.nr_done, i.e. it initialized the field with
    the start extent. Therefore non-initial requests resulting from the
    split would pass too large a number into pv_shim_offline_memory().
    
    Address that breakage by always calling pv_shim_offline_memory()
    regardless of current hypercall preemption status, with a suitably
    adjusted first argument. Note that this is correct also for the native
    guest case: We now simply "commit" what was completed right away, rather
    than at the end of a series of preemption/re-start cycles. In fact this
    improves overall preemption behavior: There's no longer a potentially
    big chunk of work done non-preemptively at the end of the last
    "iteration".
    
    Fixes: b2245acc60c3 ("xen/pvshim: memory hotplug")
    Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 1d7fbc535d1d37bdc2cc53ede360b0f6651f7de1
    master date: 2022-10-28 15:49:33 +0200
---
 xen/common/memory.c | 19 +++++++------------
 1 file changed, 7 insertions(+), 12 deletions(-)

diff --git a/xen/common/memory.c b/xen/common/memory.c
index 064de4ad8d..76f8858cc3 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1420,22 +1420,17 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 
         rc = args.nr_done;
 
-        if ( args.preempted )
-            return hypercall_create_continuation(
-                __HYPERVISOR_memory_op, "lh",
-                op | (rc << MEMOP_EXTENT_SHIFT), arg);
-
 #ifdef CONFIG_X86
         if ( pv_shim && op == XENMEM_decrease_reservation )
-            /*
-             * Only call pv_shim_offline_memory when the hypercall has
-             * finished. Note that nr_done is used to cope in case the
-             * hypercall has failed and only part of the extents where
-             * processed.
-             */
-            pv_shim_offline_memory(args.nr_done, args.extent_order);
+            pv_shim_offline_memory(args.nr_done - start_extent,
+                                   args.extent_order);
 #endif
 
+        if ( args.preempted )
+            return hypercall_create_continuation(
+                __HYPERVISOR_memory_op, "lh",
+                op | (rc << MEMOP_EXTENT_SHIFT), arg);
+
         break;
 
     case XENMEM_exchange:
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.16


From xen-changelog-bounces@lists.xenproject.org Mon Oct 31 12:44:09 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Oct 2022 12:44:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.432833.685495 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opU98-0002mf-Ps; Mon, 31 Oct 2022 12:44:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 432833.685495; Mon, 31 Oct 2022 12:44:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opU98-0002mX-N5; Mon, 31 Oct 2022 12:44:06 +0000
Received: by outflank-mailman (input) for mailman id 432833;
 Mon, 31 Oct 2022 12:44:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opU96-0002mR-UN
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:44:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opU96-00018J-Sf
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:44:04 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opU96-0002cn-Rf
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:44:04 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=ZG7F9czhEbl6mNVtSbNoMGGPJkctli2U2qZjSwrIgBA=; b=5rousAfynWHtfyT7xhp3Y2Cj1e
	pOsWgWagJ8ASNY7qMZbG76dLSsFIl9RCzFkKgZEQqR/TYhP/7BAqLI1+m+6fCRuKIqBJLAZOdXrcI
	HRuAm2NS0g7MD1N29XBeUcKAbGqelU9sjoGQR0WJh6KW1MFCVPNFsopNHMzjmvH1MO+E=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.15] VMX: correct error handling in vmx_create_vmcs()
Message-Id: <E1opU96-0002cn-Rf@xenbits.xenproject.org>
Date: Mon, 31 Oct 2022 12:44:04 +0000

commit 3885fa42349c3c6f31f0e0eec3b4605dca7fdda9
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Mon Oct 31 13:31:26 2022 +0100
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 31 13:31:26 2022 +0100

    VMX: correct error handling in vmx_create_vmcs()
    
    With the addition of vmx_add_msr() calls to construct_vmcs() there are
    now cases where simply freeing the VMCS isn't enough: The MSR bitmap
    page as well as one of the MSR area ones (if it's the 2nd vmx_add_msr()
    which fails) may also need freeing. Switch to using vmx_destroy_vmcs()
    instead.
    
    Fixes: 3bd36952dab6 ("x86/spec-ctrl: Introduce an option to control L1D_FLUSH for HVM HAP guests")
    Fixes: 53a570b28569 ("x86/spec-ctrl: Support IBPB-on-entry")
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>
    master commit: 448d28309f1a966bdc850aff1a637e0b79a03e43
    master date: 2022-10-12 17:57:56 +0200
---
 xen/arch/x86/hvm/vmx/vmcs.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index dd817cee4e..237b13459d 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -1831,7 +1831,7 @@ int vmx_create_vmcs(struct vcpu *v)
 
     if ( (rc = construct_vmcs(v)) != 0 )
     {
-        vmx_free_vmcs(vmx->vmcs_pa);
+        vmx_destroy_vmcs(v);
         return rc;
     }
 
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.15


From xen-changelog-bounces@lists.xenproject.org Mon Oct 31 12:44:16 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Oct 2022 12:44:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.432834.685499 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opU9I-0002pM-Tn; Mon, 31 Oct 2022 12:44:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 432834.685499; Mon, 31 Oct 2022 12:44:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opU9I-0002pE-RC; Mon, 31 Oct 2022 12:44:16 +0000
Received: by outflank-mailman (input) for mailman id 432834;
 Mon, 31 Oct 2022 12:44:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opU9H-0002ow-0Z
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:44:15 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opU9G-00018U-W3
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:44:14 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opU9G-0002dR-V2
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:44:14 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=Mo+u/GorJGFNf+Zp8I/XLe0kVxcA9N7Zi4sE2GVX38o=; b=FW6LSJ/hDd4ObyGyvgFxJbi1Sg
	FM53vzsR5/Jqhg3dlz4Y0H+y0pl6nJg/Xf+4RDzgVhvVdvIk0cl3ZWgTSakVmAeY8h+W2NePB15NE
	cOReB16YCZGP+o595PaGN5wVs4wbxjR6LnGv8Ojm5aD5gWojUkmDPD6hq8Ds3AM4eQwY=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.15] argo: Remove reachable ASSERT_UNREACHABLE
Message-Id: <E1opU9G-0002dR-V2@xenbits.xenproject.org>
Date: Mon, 31 Oct 2022 12:44:14 +0000

commit 916668baf9252ac30260e3394278a098712c5d34
Author:     Jason Andryuk <jandryuk@gmail.com>
AuthorDate: Mon Oct 31 13:32:59 2022 +0100
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 31 13:32:59 2022 +0100

    argo: Remove reachable ASSERT_UNREACHABLE
    
    I observed this ASSERT_UNREACHABLE in partner_rings_remove consistently
    trip.  It was in OpenXT with the viptables patch applied.
    
    dom10 shuts down.
    dom7 is REJECTED sending to dom10.
    dom7 shuts down and this ASSERT trips for dom10.
    
    The argo_send_info has a domid, but there is no refcount taken on
    the domain.  Therefore it's not appropriate to ASSERT that the domain
    can be looked up via domid.  Replace with a debug message.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Reviewed-by: Christopher Clark <christopher.w.clark@gmail.com>
    master commit: 197f612b77c5afe04e60df2100a855370d720ad7
    master date: 2022-10-14 14:45:41 +0100
---
 xen/common/argo.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/common/argo.c b/xen/common/argo.c
index 49be715f63..2b0d980d4b 100644
--- a/xen/common/argo.c
+++ b/xen/common/argo.c
@@ -1299,7 +1299,8 @@ partner_rings_remove(struct domain *src_d)
                     ASSERT_UNREACHABLE();
             }
             else
-                ASSERT_UNREACHABLE();
+                argo_dprintk("%pd has entry for stale partner d%u\n",
+                             src_d, send_info->id.domain_id);
 
             if ( dst_d )
                 rcu_unlock_domain(dst_d);
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.15


From xen-changelog-bounces@lists.xenproject.org Mon Oct 31 12:44:27 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Oct 2022 12:44:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.432836.685503 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opU9S-0002rw-VW; Mon, 31 Oct 2022 12:44:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 432836.685503; Mon, 31 Oct 2022 12:44:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opU9S-0002ro-Sg; Mon, 31 Oct 2022 12:44:26 +0000
Received: by outflank-mailman (input) for mailman id 432836;
 Mon, 31 Oct 2022 12:44:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opU9R-0002rO-3i
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:44:25 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opU9R-00018v-30
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:44:25 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opU9R-0002du-27
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:44:25 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=+xX1WhtRqsoeT8A3P3+fBfTXvoEqV6z7Uid31sx7cSQ=; b=cp7TAiXkwaBXTofbHWyAiODbdT
	9LJIbS2kix93OpD5w3x+zK336unYirRLhulcGHYlAP1vn2pbVPaD+X71EGfbX+KQR3/GruwwfFJlg
	zUQbesRw3guKVyETES+fx+FIp1U1Vn3VH1mEjTp06I4Y1tBlo2a5aAfdUYBW0TFhDkTc=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.15] EFI: don't convert memory marked for runtime use to ordinary RAM
Message-Id: <E1opU9R-0002du-27@xenbits.xenproject.org>
Date: Mon, 31 Oct 2022 12:44:25 +0000

commit b833014293f3fa5a7c48756ce0c8c9f3e4a666ff
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Mon Oct 31 13:33:29 2022 +0100
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 31 13:33:29 2022 +0100

    EFI: don't convert memory marked for runtime use to ordinary RAM
    
    efi_init_memory() in both relevant places is treating EFI_MEMORY_RUNTIME
    higher priority than the type of the range. To avoid accessing memory at
    runtime which was re-used for other purposes, make
    efi_arch_process_memory_map() follow suit. While in theory the same would
    apply to EfiACPIReclaimMemory, we don't actually "reclaim" or clobber
    that memory (converted to E820_ACPI on x86) there (and it would be a bug
    if the Dom0 kernel tried to reclaim the range, bypassing Xen's memory
    management, plus it would be at least bogus if it clobbered that space),
    hence that type's handling can be left alone.
    
    Fixes: bf6501a62e80 ("x86-64: EFI boot code")
    Fixes: facac0af87ef ("x86-64: EFI runtime code")
    Fixes: 6d70ea10d49f ("Add ARM EFI boot support")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    master commit: f324300c8347b6aa6f9c0b18e0a90bbf44011a9a
    master date: 2022-10-21 12:30:24 +0200
---
 xen/arch/arm/efi/efi-boot.h | 3 ++-
 xen/arch/x86/efi/efi-boot.h | 4 +++-
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/efi/efi-boot.h b/xen/arch/arm/efi/efi-boot.h
index cf9c37153f..37d7ebd59a 100644
--- a/xen/arch/arm/efi/efi-boot.h
+++ b/xen/arch/arm/efi/efi-boot.h
@@ -149,7 +149,8 @@ static EFI_STATUS __init efi_process_memory_map_bootinfo(EFI_MEMORY_DESCRIPTOR *
 
     for ( Index = 0; Index < (mmap_size / desc_size); Index++ )
     {
-        if ( desc_ptr->Attribute & EFI_MEMORY_WB &&
+        if ( !(desc_ptr->Attribute & EFI_MEMORY_RUNTIME) &&
+             (desc_ptr->Attribute & EFI_MEMORY_WB) &&
              (desc_ptr->Type == EfiConventionalMemory ||
               desc_ptr->Type == EfiLoaderCode ||
               desc_ptr->Type == EfiLoaderData ||
diff --git a/xen/arch/x86/efi/efi-boot.h b/xen/arch/x86/efi/efi-boot.h
index 84fd779314..3c3b3ab936 100644
--- a/xen/arch/x86/efi/efi-boot.h
+++ b/xen/arch/x86/efi/efi-boot.h
@@ -183,7 +183,9 @@ static void __init efi_arch_process_memory_map(EFI_SYSTEM_TABLE *SystemTable,
             /* fall through */
         case EfiLoaderCode:
         case EfiLoaderData:
-            if ( desc->Attribute & EFI_MEMORY_WB )
+            if ( desc->Attribute & EFI_MEMORY_RUNTIME )
+                type = E820_RESERVED;
+            else if ( desc->Attribute & EFI_MEMORY_WB )
                 type = E820_RAM;
             else
         case EfiUnusableMemory:
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.15


From xen-changelog-bounces@lists.xenproject.org Mon Oct 31 12:44:37 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Oct 2022 12:44:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.432837.685508 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opU9d-0002ur-0u; Mon, 31 Oct 2022 12:44:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 432837.685508; Mon, 31 Oct 2022 12:44:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opU9c-0002uj-U9; Mon, 31 Oct 2022 12:44:36 +0000
Received: by outflank-mailman (input) for mailman id 432837;
 Mon, 31 Oct 2022 12:44:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opU9b-0002ua-6p
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:44:35 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opU9b-000196-66
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:44:35 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opU9b-0002eN-5J
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:44:35 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=UufZw4ritVEeEkwrsScmn0aAYNPI0IBQ3csD1EvIt04=; b=sn4mnRbpf8SfPNFUnCZ8juA5/r
	vl2R2aVqJ44crWQZyUEnXrQl8P5J6V95umuRNWUc9X0vRoNaRWL1k1N1sWZ5bnWJrxqeH+cQyHgeF
	sDMP5aLjPFt4NdaZwDpRmyU33ZVQUAGkIpqlY7J4N3eBWNBLZTesss416WaAVjt/JxIU=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.15] xen/sched: fix race in RTDS scheduler
Message-Id: <E1opU9b-0002eN-5J@xenbits.xenproject.org>
Date: Mon, 31 Oct 2022 12:44:35 +0000

commit 1f679f084fef76810762ee69a584fc1b524be0b6
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Mon Oct 31 13:33:59 2022 +0100
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 31 13:33:59 2022 +0100

    xen/sched: fix race in RTDS scheduler
    
    When a domain gets paused the unit runnable state can change to "not
    runnable" without the scheduling lock being involved. This means that
    a specific scheduler isn't involved in this change of runnable state.
    
    In the RTDS scheduler this can result in an inconsistency in case a
    unit is losing its "runnable" capability while the RTDS scheduler's
    scheduling function is active. RTDS will remove the unit from the run
    queue, but doesn't do so for the replenish queue, leading to hitting
    an ASSERT() in replq_insert() later when the domain is unpaused again.
    
    Fix that by removing the unit from the replenish queue as well in this
    case.
    
    Fixes: 7c7b407e7772 ("xen/sched: introduce unit_runnable_state()")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Dario Faggioli <dfaggioli@suse.com>
    master commit: 73c62927f64ecb48f27d06176befdf76b879f340
    master date: 2022-10-21 12:32:23 +0200
---
 xen/common/sched/rt.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/common/sched/rt.c b/xen/common/sched/rt.c
index c24cd2ac32..ec2ca1bebc 100644
--- a/xen/common/sched/rt.c
+++ b/xen/common/sched/rt.c
@@ -1087,6 +1087,7 @@ rt_schedule(const struct scheduler *ops, struct sched_unit *currunit,
         else if ( !unit_runnable_state(snext->unit) )
         {
             q_remove(snext);
+            replq_remove(ops, snext);
             snext = rt_unit(sched_idle_unit(sched_cpu));
         }
 
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.15


From xen-changelog-bounces@lists.xenproject.org Mon Oct 31 12:44:47 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Oct 2022 12:44:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.432838.685511 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opU9n-0002xO-2M; Mon, 31 Oct 2022 12:44:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 432838.685511; Mon, 31 Oct 2022 12:44:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opU9m-0002xH-Vj; Mon, 31 Oct 2022 12:44:46 +0000
Received: by outflank-mailman (input) for mailman id 432838;
 Mon, 31 Oct 2022 12:44:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opU9l-0002x2-AB
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:44:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opU9l-00019G-9S
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:44:45 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opU9l-0002em-8c
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:44:45 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=C5ntwkdGIj+gmebMtb84iLMcp/8J1kww1X11t4dHLUw=; b=njBtH8QH4qoc1hVlB+yWpvdgaz
	9NQyICXjzB9SodShNkOahyUH0o4/HxmRnjtBERq1oqnwuJywBovO28H69Hgj/J7XtEEmjVwRkmuWk
	Np/zM7HU3kKqxmaM4P7r/a/n12sTPDxQhtuj5+wtnYMj8XSfk1rtIa7BiIJthXWfrM9Q=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.15] xen/sched: fix restore_vcpu_affinity() by removing it
Message-Id: <E1opU9l-0002em-8c@xenbits.xenproject.org>
Date: Mon, 31 Oct 2022 12:44:45 +0000

commit 9c5114696c6f7773b7f3691f27aaa7a0636c916d
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Mon Oct 31 13:34:28 2022 +0100
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 31 13:34:28 2022 +0100

    xen/sched: fix restore_vcpu_affinity() by removing it
    
    When the system is coming up after having been suspended,
    restore_vcpu_affinity() is called for each domain in order to adjust
    each vcpu's affinity settings in case a cpu didn't come back to life
    again.
    
    The way restore_vcpu_affinity() does this is wrong, because the
    specific scheduler isn't informed about a possible migration of the
    vcpu to another cpu. Additionally, the migration often happens even
    when all cpus are running again, as it is done without checking
    whether it is really needed.
    
    As cpupool management already calls cpu_disable_scheduler() for cpus
    which haven't come back up, and cpu_disable_scheduler() takes care of
    any needed vcpu migration in the proper way, there is simply no need
    for restore_vcpu_affinity().
    
    So just remove restore_vcpu_affinity() completely, together with the
    no longer used sched_reset_affinity_broken().
    
    Fixes: 8a04eaa8ea83 ("xen/sched: move some per-vcpu items to struct sched_unit")
    Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Dario Faggioli <dfaggioli@suse.com>
    Tested-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    master commit: fce1f381f7388daaa3e96dbb0d67d7a3e4bb2d2d
    master date: 2022-10-24 11:16:27 +0100
---
 xen/arch/x86/acpi/power.c |  3 --
 xen/common/sched/core.c   | 78 -----------------------------------------------
 xen/include/xen/sched.h   |  1 -
 3 files changed, 82 deletions(-)

diff --git a/xen/arch/x86/acpi/power.c b/xen/arch/x86/acpi/power.c
index dd397f7130..1a7baeebe6 100644
--- a/xen/arch/x86/acpi/power.c
+++ b/xen/arch/x86/acpi/power.c
@@ -159,10 +159,7 @@ static void thaw_domains(void)
 
     rcu_read_lock(&domlist_read_lock);
     for_each_domain ( d )
-    {
-        restore_vcpu_affinity(d);
         domain_unpause(d);
-    }
     rcu_read_unlock(&domlist_read_lock);
 }
 
diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 900aab8f66..9173cf690c 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -1188,84 +1188,6 @@ static bool sched_check_affinity_broken(const struct sched_unit *unit)
     return false;
 }
 
-static void sched_reset_affinity_broken(const struct sched_unit *unit)
-{
-    struct vcpu *v;
-
-    for_each_sched_unit_vcpu ( unit, v )
-        v->affinity_broken = false;
-}
-
-void restore_vcpu_affinity(struct domain *d)
-{
-    unsigned int cpu = smp_processor_id();
-    struct sched_unit *unit;
-
-    ASSERT(system_state == SYS_STATE_resume);
-
-    rcu_read_lock(&sched_res_rculock);
-
-    for_each_sched_unit ( d, unit )
-    {
-        spinlock_t *lock;
-        unsigned int old_cpu = sched_unit_master(unit);
-        struct sched_resource *res;
-
-        ASSERT(!unit_runnable(unit));
-
-        /*
-         * Re-assign the initial processor as after resume we have no
-         * guarantee the old processor has come back to life again.
-         *
-         * Therefore, here, before actually unpausing the domains, we should
-         * set v->processor of each of their vCPUs to something that will
-         * make sense for the scheduler of the cpupool in which they are in.
-         */
-        lock = unit_schedule_lock_irq(unit);
-
-        cpumask_and(cpumask_scratch_cpu(cpu), unit->cpu_hard_affinity,
-                    cpupool_domain_master_cpumask(d));
-        if ( cpumask_empty(cpumask_scratch_cpu(cpu)) )
-        {
-            if ( sched_check_affinity_broken(unit) )
-            {
-                sched_set_affinity(unit, unit->cpu_hard_affinity_saved, NULL);
-                sched_reset_affinity_broken(unit);
-                cpumask_and(cpumask_scratch_cpu(cpu), unit->cpu_hard_affinity,
-                            cpupool_domain_master_cpumask(d));
-            }
-
-            if ( cpumask_empty(cpumask_scratch_cpu(cpu)) )
-            {
-                /* Affinity settings of one vcpu are for the complete unit. */
-                printk(XENLOG_DEBUG "Breaking affinity for %pv\n",
-                       unit->vcpu_list);
-                sched_set_affinity(unit, &cpumask_all, NULL);
-                cpumask_and(cpumask_scratch_cpu(cpu), unit->cpu_hard_affinity,
-                            cpupool_domain_master_cpumask(d));
-            }
-        }
-
-        res = get_sched_res(cpumask_any(cpumask_scratch_cpu(cpu)));
-        sched_set_res(unit, res);
-
-        spin_unlock_irq(lock);
-
-        /* v->processor might have changed, so reacquire the lock. */
-        lock = unit_schedule_lock_irq(unit);
-        res = sched_pick_resource(unit_scheduler(unit), unit);
-        sched_set_res(unit, res);
-        spin_unlock_irq(lock);
-
-        if ( old_cpu != sched_unit_master(unit) )
-            sched_move_irqs(unit);
-    }
-
-    rcu_read_unlock(&sched_res_rculock);
-
-    domain_update_node_affinity(d);
-}
-
 /*
  * This function is used by cpu_hotplug code via cpu notifier chain
  * and from cpupools to switch schedulers on a cpu.
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 4e25627d96..bb05d167ae 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -993,7 +993,6 @@ void vcpu_set_periodic_timer(struct vcpu *v, s_time_t value);
 void sched_setup_dom0_vcpus(struct domain *d);
 int vcpu_temporary_affinity(struct vcpu *v, unsigned int cpu, uint8_t reason);
 int vcpu_set_hard_affinity(struct vcpu *v, const cpumask_t *affinity);
-void restore_vcpu_affinity(struct domain *d);
 int vcpu_affinity_domctl(struct domain *d, uint32_t cmd,
                          struct xen_domctl_vcpuaffinity *vcpuaff);
 
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.15


From xen-changelog-bounces@lists.xenproject.org Mon Oct 31 12:44:57 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Oct 2022 12:44:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.432839.685514 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opU9x-0002zj-3r; Mon, 31 Oct 2022 12:44:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 432839.685514; Mon, 31 Oct 2022 12:44:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opU9x-0002zc-1B; Mon, 31 Oct 2022 12:44:57 +0000
Received: by outflank-mailman (input) for mailman id 432839;
 Mon, 31 Oct 2022 12:44:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opU9v-0002zQ-Db
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:44:55 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opU9v-00019Q-Co
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:44:55 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opU9v-0002fK-Bo
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:44:55 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=XGJBPJtZv2ZDt8/0ju4qeHEFrQBN7QjzZGw/BgU45cM=; b=ZXFpni3ydBsNRaSl32dIGGEfij
	wxYmZpETE8EYWrTd7w9fhAd6c1c9XdM7en4r/V00rmpxl4NrNv3JRacUgMlLcub0dOO0Xe2fEhjko
	RO9YEBqbCLxgzsAAUJrerazdJWacR250wzDfRyGIevE1KAvK/YMHfqygNpr4qcgxfpA4=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.15] x86/shadow: drop (replace) bogus assertions
Message-Id: <E1opU9v-0002fK-Bo@xenbits.xenproject.org>
Date: Mon, 31 Oct 2022 12:44:55 +0000

commit 08bc78b4eecaef33250038f7e484bdf01ea1017c
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Mon Oct 31 13:35:06 2022 +0100
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 31 13:35:06 2022 +0100

    x86/shadow: drop (replace) bogus assertions
    
    The addition of a call to shadow_blow_tables() from shadow_teardown()
    has resulted in the "no vcpus" related assertion becoming triggerable:
    If domain_create() fails with at least one page successfully allocated
    in the course of shadow_enable(), or if domain_create() succeeds and
    the domain is then killed without ever invoking XEN_DOMCTL_max_vcpus.
    Note that in-tree tests (test-resource and test-tsx) do exactly the
    latter of these two.
    
    The assertion's comment was bogus anyway: Shadow mode has been getting
    enabled before allocation of vCPU-s for quite some time. Convert the
    assertion to a conditional: As long as there are no vCPU-s, there's
    nothing to blow away.
    
    Fixes: e7aa55c0aab3 ("x86/p2m: free the paging memory pool preemptively")
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    
    A similar assertion/comment pair exists in _shadow_prealloc(); the
    comment is similarly bogus, and the assertion could in principle trigger
    e.g. when shadow_alloc_p2m_page() is called early enough. Replace those
    at the same time by a similar early return, here indicating failure to
    the caller (which will generally lead to the domain being crashed in
    shadow_prealloc()).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: a92dc2bb30ba65ae25d2f417677eb7ef9a6a0fef
    master date: 2022-10-24 15:46:11 +0200
---
 xen/arch/x86/mm/shadow/common.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 8f7fddcee1..e36d49d1fc 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -942,8 +942,9 @@ static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
         /* No reclaim when the domain is dying, teardown will take care of it. */
         return false;
 
-    /* Shouldn't have enabled shadows if we've no vcpus. */
-    ASSERT(d->vcpu && d->vcpu[0]);
+    /* Nothing to reclaim when there are no vcpus yet. */
+    if ( !d->vcpu[0] )
+        return false;
 
     /* Stage one: walk the list of pinned pages, unpinning them */
     perfc_incr(shadow_prealloc_1);
@@ -1033,8 +1034,9 @@ void shadow_blow_tables(struct domain *d)
     mfn_t smfn;
     int i;
 
-    /* Shouldn't have enabled shadows if we've no vcpus. */
-    ASSERT(d->vcpu && d->vcpu[0]);
+    /* Nothing to do when there are no vcpus yet. */
+    if ( !d->vcpu[0] )
+        return;
 
     /* Pass one: unpin all pinned pages */
     foreach_pinned_shadow(d, sp, t)
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.15


From xen-changelog-bounces@lists.xenproject.org Mon Oct 31 12:45:06 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Oct 2022 12:45:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.432840.685519 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opUA6-00036I-5y; Mon, 31 Oct 2022 12:45:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 432840.685519; Mon, 31 Oct 2022 12:45:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opUA6-00035s-35; Mon, 31 Oct 2022 12:45:06 +0000
Received: by outflank-mailman (input) for mailman id 432840;
 Mon, 31 Oct 2022 12:45:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opUA5-00032x-Gi
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:45:05 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opUA5-0001A1-Fy
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:45:05 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opUA5-0002gi-FE
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:45:05 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=WaVcgLk9NKEh22OgQWGEQvrKwmKymbwJb91+mmYT2sw=; b=sIyRT/I819qLe30u1tZzmZzFuY
	g8+qn+0AdZiKDyuQkpyceBYYT4i9wHbB9tN7JaUBad0Rs5KbcpcVmbMiGhsW/66aNarONnycmO4ow
	g+dQR6Gd9KF7mRhkmevlmcQvVPbfCUDYupzSzXiFaCz8j5xV3JgrwipvGh5fX4AhKhEM=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.15] vpci: don't assume that vpci per-device data exists unconditionally
Message-Id: <E1opUA5-0002gi-FE@xenbits.xenproject.org>
Date: Mon, 31 Oct 2022 12:45:05 +0000

commit 6b035f4f5829eb213cb9fcbe83b5dfae05c857a6
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Mon Oct 31 13:35:33 2022 +0100
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 31 13:35:33 2022 +0100

    vpci: don't assume that vpci per-device data exists unconditionally
    
    It's possible for a device to be assigned to a domain but have no
    vpci structure if vpci_process_pending() failed and called
    vpci_remove_device() as a result.  The unconditional accesses done by
    vpci_{read,write}() and vpci_remove_device() to pdev->vpci would
    then trigger a NULL pointer dereference.
    
    Add checks for pdev->vpci presence in the affected functions.
    
    Fixes: 9c244fdef7 ('vpci: add header handlers')
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: 6ccb5e308ceeb895fbccd87a528a8bd24325aa39
    master date: 2022-10-26 14:55:30 +0200
---
 xen/drivers/vpci/vpci.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
index a27c9e600d..6b90e4fa32 100644
--- a/xen/drivers/vpci/vpci.c
+++ b/xen/drivers/vpci/vpci.c
@@ -37,6 +37,9 @@ extern vpci_register_init_t *const __end_vpci_array[];
 
 void vpci_remove_device(struct pci_dev *pdev)
 {
+    if ( !pdev->vpci )
+        return;
+
     spin_lock(&pdev->vpci->lock);
     while ( !list_empty(&pdev->vpci->handlers) )
     {
@@ -320,7 +323,7 @@ uint32_t vpci_read(pci_sbdf_t sbdf, unsigned int reg, unsigned int size)
 
     /* Find the PCI dev matching the address. */
     pdev = pci_get_pdev_by_domain(d, sbdf.seg, sbdf.bus, sbdf.devfn);
-    if ( !pdev )
+    if ( !pdev || !pdev->vpci )
         return vpci_read_hw(sbdf, reg, size);
 
     spin_lock(&pdev->vpci->lock);
@@ -430,7 +433,7 @@ void vpci_write(pci_sbdf_t sbdf, unsigned int reg, unsigned int size,
      * Passthrough everything that's not trapped.
      */
     pdev = pci_get_pdev_by_domain(d, sbdf.seg, sbdf.bus, sbdf.devfn);
-    if ( !pdev )
+    if ( !pdev || !pdev->vpci )
     {
         vpci_write_hw(sbdf, reg, size, data);
         return;
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.15


From xen-changelog-bounces@lists.xenproject.org Mon Oct 31 12:45:16 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Oct 2022 12:45:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.432843.685523 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opUAG-0003EO-7X; Mon, 31 Oct 2022 12:45:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 432843.685523; Mon, 31 Oct 2022 12:45:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opUAG-0003EE-4i; Mon, 31 Oct 2022 12:45:16 +0000
Received: by outflank-mailman (input) for mailman id 432843;
 Mon, 31 Oct 2022 12:45:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opUAF-0003E6-JW
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:45:15 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opUAF-0001AC-Ix
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:45:15 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opUAF-0002ha-IB
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:45:15 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=b+ESV3NH5CCxG+cdp+Us58j9zUENggOZNxGgtaSJduc=; b=M4yJgeeEmAfVjkC6C0Aa55r9fE
	3CuOPg+vEoyJxC9px25ANOWaumwkE1XFnVYy93eovlGWg0ZPW7haJK2Y1Z0gg8cjAH+h4YgmdYTDM
	DujyC2oc+oI5I9R5CEzI9Xqja/rleysafrjRmJbkcxFZOatxVJyp3zQfdIUSs8GE4/Ko=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.15] vpci/msix: remove from table list on detach
Message-Id: <E1opUAF-0002ha-IB@xenbits.xenproject.org>
Date: Mon, 31 Oct 2022 12:45:15 +0000

commit bff4c4457950abb498270d921d728f654876f944
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Mon Oct 31 13:35:59 2022 +0100
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 31 13:35:59 2022 +0100

    vpci/msix: remove from table list on detach
    
    Teardown of MSI-X vPCI-related data doesn't currently remove the
    MSI-X device data from the list of MSI-X tables handled by the
    domain, leading to a use-after-free of the data in the msix
    structure.
    
    Remove the structure from the list before freeing it in order to
    solve this.
    
    Reported-by: Jan Beulich <jbeulich@suse.com>
    Fixes: d6281be9d0 ('vpci/msix: add MSI-X handlers')
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: c14aea137eab29eb9c30bfad745a00c65ad21066
    master date: 2022-10-26 14:56:58 +0200
---
 xen/drivers/vpci/vpci.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
index 6b90e4fa32..75edbbee40 100644
--- a/xen/drivers/vpci/vpci.c
+++ b/xen/drivers/vpci/vpci.c
@@ -51,8 +51,12 @@ void vpci_remove_device(struct pci_dev *pdev)
         xfree(r);
     }
     spin_unlock(&pdev->vpci->lock);
-    if ( pdev->vpci->msix && pdev->vpci->msix->pba )
-        iounmap(pdev->vpci->msix->pba);
+    if ( pdev->vpci->msix )
+    {
+        list_del(&pdev->vpci->msix->next);
+        if ( pdev->vpci->msix->pba )
+            iounmap(pdev->vpci->msix->pba);
+    }
     xfree(pdev->vpci->msix);
     xfree(pdev->vpci->msi);
     xfree(pdev->vpci);
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.15


From xen-changelog-bounces@lists.xenproject.org Mon Oct 31 12:45:26 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Oct 2022 12:45:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.432845.685527 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opUAQ-0003Ie-AU; Mon, 31 Oct 2022 12:45:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 432845.685527; Mon, 31 Oct 2022 12:45:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opUAQ-0003IS-7Y; Mon, 31 Oct 2022 12:45:26 +0000
Received: by outflank-mailman (input) for mailman id 432845;
 Mon, 31 Oct 2022 12:45:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opUAP-0003IK-Mr
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:45:25 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opUAP-0001Af-M9
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:45:25 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opUAP-0002i1-LC
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:45:25 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=WI6xhBC6hxxrGy4nzTPAa+jLigTbxx2PlyDLIskjlto=; b=0qUKArE00e1KFAVCKiPsyX4wFi
	Kku0zdle2rW9/LZBJRQIZPodkxLQcF6bilVYsw+RlKHU1COhAl58dFIpeZ1mOQ6TvA3jdJt3PE/Fl
	14o0ERWY/hZObiAQhXTSddILQyw/q+QXFFBYO4ZdYKsfdV2FHG848xbkgg6O2SekfMmc=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.15] x86: also zap secondary time area handles during soft reset
Message-Id: <E1opUAP-0002i1-LC@xenbits.xenproject.org>
Date: Mon, 31 Oct 2022 12:45:25 +0000

commit 9b8b65c827169eca2d0e500150009ac0f857d455
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Mon Oct 31 13:36:25 2022 +0100
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 31 13:36:25 2022 +0100

    x86: also zap secondary time area handles during soft reset
    
    Just like domain_soft_reset() properly zaps runstate area handles, the
    secondary time area ones also need discarding to prevent guest memory
    corruption once the guest is re-started.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    master commit: b80d4f8d2ea6418e32fb4f20d1304ace6d6566e3
    master date: 2022-10-27 11:49:09 +0200
---
 xen/arch/x86/domain.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index ce6ddcf313..e9b8ed4c96 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -927,6 +927,7 @@ int arch_domain_soft_reset(struct domain *d)
     struct page_info *page = virt_to_page(d->shared_info), *new_page;
     int ret = 0;
     struct domain *owner;
+    struct vcpu *v;
     mfn_t mfn;
     gfn_t gfn;
     p2m_type_t p2mt;
@@ -1006,7 +1007,12 @@ int arch_domain_soft_reset(struct domain *d)
                "Failed to add a page to replace %pd's shared_info frame %"PRI_gfn"\n",
                d, gfn_x(gfn));
         free_domheap_page(new_page);
+        goto exit_put_gfn;
     }
+
+    for_each_vcpu ( d, v )
+        set_xen_guest_handle(v->arch.time_info_guest, NULL);
+
  exit_put_gfn:
     put_gfn(d, gfn_x(gfn));
  exit_put_page:
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.15


From xen-changelog-bounces@lists.xenproject.org Mon Oct 31 12:45:36 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Oct 2022 12:45:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.432846.685531 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opUAa-0003MF-C2; Mon, 31 Oct 2022 12:45:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 432846.685531; Mon, 31 Oct 2022 12:45:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opUAa-0003M8-93; Mon, 31 Oct 2022 12:45:36 +0000
Received: by outflank-mailman (input) for mailman id 432846;
 Mon, 31 Oct 2022 12:45:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opUAZ-0003M2-Pa
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:45:35 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opUAZ-0001Ap-Ov
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:45:35 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opUAZ-0002iZ-ON
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:45:35 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=r4gvFaPSb0t0TlR+n3+yGlIJHFw8oBOS/bszSq5a9cY=; b=xzPoO3/MsnLA8kAbvXokeB7e4p
	2EHLhJi28C9BiWiek3muCJYdZOggR0W2YsjxkWsEMyOqkGcsbgdzYFw/BXpkD+sLHZiAdCwu/EPaE
	krAOx6cb6ljO4mkDnTiZKuwoqA0xzK3EPDvUCpFpJ0NUiSmhuBazKrA7i4+GWhryY7z8=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.15] common: map_vcpu_info() wants to unshare the underlying page
Message-Id: <E1opUAZ-0002iZ-ON@xenbits.xenproject.org>
Date: Mon, 31 Oct 2022 12:45:35 +0000

commit 317894fa6a067a7903199bc5c1e3e06a0436caf8
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Mon Oct 31 13:36:50 2022 +0100
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 31 13:36:50 2022 +0100

    common: map_vcpu_info() wants to unshare the underlying page
    
    Not passing P2M_UNSHARE to get_page_from_gfn() means there won't even be
    an attempt to unshare the referenced page, without any indication to the
    caller (e.g. -EAGAIN). Note that guests have no direct control over
    which of their pages are shared (or paged out), and hence they have no
    way to make sure all on their own that the subsequent obtaining of a
    writable type reference can actually succeed.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
    master commit: 48980cf24d5cf41fd644600f99c753419505e735
    master date: 2022-10-28 11:38:32 +0200
---
 xen/common/domain.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index 17cc32fde3..0fb7f9a622 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -1454,7 +1454,7 @@ int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset)
     if ( (v != current) && !(v->pause_flags & VPF_down) )
         return -EINVAL;
 
-    page = get_page_from_gfn(d, gfn, NULL, P2M_ALLOC);
+    page = get_page_from_gfn(d, gfn, NULL, P2M_UNSHARE);
     if ( !page )
         return -EINVAL;
 
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.15


From xen-changelog-bounces@lists.xenproject.org Mon Oct 31 12:45:46 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Oct 2022 12:45:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.432847.685535 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opUAk-0003P3-DH; Mon, 31 Oct 2022 12:45:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 432847.685535; Mon, 31 Oct 2022 12:45:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opUAk-0003Ow-Ah; Mon, 31 Oct 2022 12:45:46 +0000
Received: by outflank-mailman (input) for mailman id 432847;
 Mon, 31 Oct 2022 12:45:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opUAj-0003On-SY
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:45:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opUAj-0001B1-Rw
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:45:45 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opUAj-0002j8-RC
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:45:45 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=b4REODAGz9Dkibq7zSG24/f4/XC5PB67yMSl0bREs7M=; b=RrYV/iHhId8loUYRruNKhRX4NW
	qOfk4cxtfREzmwe2ePyWPVyg+VEcFOzrj5U3Gbz+AAKrhZUEfcPXCHf+5RvwCz4aC7xHw1Uz6r+dJ
	GEKfbyDRLnnXtk0xvscw1kExXLJVPEUUUCkBuMA3VBDz0tjPF/rl/+NomViKrP11Cuoc=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.15] x86/pv-shim: correctly ignore empty onlining requests
Message-Id: <E1opUAj-0002j8-RC@xenbits.xenproject.org>
Date: Mon, 31 Oct 2022 12:45:45 +0000

commit a46f01fad17173afe3809ac1980cbe4b67a9a8b5
Author:     Igor Druzhinin <igor.druzhinin@citrix.com>
AuthorDate: Mon Oct 31 13:37:17 2022 +0100
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 31 13:37:17 2022 +0100

    x86/pv-shim: correctly ignore empty onlining requests
    
    Mem-op requests may have zero extents. Such requests need treating as
    no-ops. pv_shim_online_memory(), however, would have tried to take 2³²-1
    order-sized pages from its balloon list (to then populate them),
    typically ending when the entire set of ballooned pages of this order
    was consumed.
    
    Note that pv_shim_offline_memory() does not have such an issue.
    
    Fixes: b2245acc60c3 ("xen/pvshim: memory hotplug")
    Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 9272225ca72801fd9fa5b268a2d1c5adebd19cd9
    master date: 2022-10-28 15:47:59 +0200
---
 xen/arch/x86/pv/shim.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/xen/arch/x86/pv/shim.c b/xen/arch/x86/pv/shim.c
index b4e83e0778..104357e2c3 100644
--- a/xen/arch/x86/pv/shim.c
+++ b/xen/arch/x86/pv/shim.c
@@ -922,6 +922,9 @@ void pv_shim_online_memory(unsigned int nr, unsigned int order)
     struct page_info *page, *tmp;
     PAGE_LIST_HEAD(list);
 
+    if ( !nr )
+        return;
+
     spin_lock(&balloon_lock);
     page_list_for_each_safe ( page, tmp, &balloon )
     {
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.15


From xen-changelog-bounces@lists.xenproject.org Mon Oct 31 12:45:56 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Oct 2022 12:45:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.432848.685539 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opUAu-0003SX-FK; Mon, 31 Oct 2022 12:45:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 432848.685539; Mon, 31 Oct 2022 12:45:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opUAu-0003SP-CZ; Mon, 31 Oct 2022 12:45:56 +0000
Received: by outflank-mailman (input) for mailman id 432848;
 Mon, 31 Oct 2022 12:45:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opUAt-0003SJ-VT
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:45:55 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opUAt-0001BC-Ul
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:45:55 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opUAt-0002jn-UA
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:45:55 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=WJIeg+HBPMMgHa9GosmfhlAQP7nrQEr4b5rto+xmxNg=; b=e3a9BWTZ2QhmY53Jri9fFkEDd+
	is31zjR6PJkfzXxGKURfI+3iNRvLyoQ7rlSJGEbHFBZU3FPiYY9oOs2ul2rUg0ewOzfGNGnfe9ykO
	e06pXw6X2v0y9kb8z/1PlGzTLzrWtnAsf/WeRv0i6xNVo2wEn1E8dINkrf2C9h3+1z1Y=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.15] x86/pv-shim: correct ballooning up for compat guests
Message-Id: <E1opUAt-0002jn-UA@xenbits.xenproject.org>
Date: Mon, 31 Oct 2022 12:45:55 +0000

commit b68e3fda8a76fb3ab582b5633727ac5545e4e8b9
Author:     Igor Druzhinin <igor.druzhinin@citrix.com>
AuthorDate: Mon Oct 31 13:37:42 2022 +0100
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 31 13:37:42 2022 +0100

    x86/pv-shim: correct ballooning up for compat guests
    
    The compat layer for multi-extent memory ops may need to split incoming
    requests. Since the guest handles in the interface structures may not be
    altered, it does so by leveraging do_memory_op()'s continuation
    handling: It hands on non-initial requests with a non-zero start extent,
    with the (native) handle suitably adjusted down. As a result
    do_memory_op() sees only the first of potentially several requests with
start extent being zero. Only in that case does the function
    issue a call to pv_shim_online_memory(), yet the range then covers only
    the first sub-range resulting from the split.
    
    Address that breakage by making a complementary call to
    pv_shim_online_memory() in the compat layer.
    
    Fixes: b2245acc60c3 ("xen/pvshim: memory hotplug")
    Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: a0bfdd201ea12aa5679bb8944d63a4e0d3c23160
    master date: 2022-10-28 15:48:50 +0200
---
 xen/common/compat/memory.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/xen/common/compat/memory.c b/xen/common/compat/memory.c
index c43fa97cf1..a0e0562a40 100644
--- a/xen/common/compat/memory.c
+++ b/xen/common/compat/memory.c
@@ -7,6 +7,7 @@ EMIT_FILE;
 #include <xen/event.h>
 #include <xen/mem_access.h>
 #include <asm/current.h>
+#include <asm/guest.h>
 #include <compat/memory.h>
 
 #define xen_domid_t domid_t
@@ -146,7 +147,10 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
                 nat.rsrv->nr_extents = end_extent;
                 ++split;
             }
-
+           /* Avoid calling pv_shim_online_memory() when in a continuation. */
+           if ( pv_shim && op != XENMEM_decrease_reservation && !start_extent )
+               pv_shim_online_memory(cmp.rsrv.nr_extents - nat.rsrv->nr_extents,
+                                     cmp.rsrv.extent_order);
             break;
 
         case XENMEM_exchange:
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.15


From xen-changelog-bounces@lists.xenproject.org Mon Oct 31 12:46:07 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Oct 2022 12:46:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.432849.685543 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opUB5-0003VI-H1; Mon, 31 Oct 2022 12:46:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 432849.685543; Mon, 31 Oct 2022 12:46:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opUB5-0003VA-ED; Mon, 31 Oct 2022 12:46:07 +0000
Received: by outflank-mailman (input) for mailman id 432849;
 Mon, 31 Oct 2022 12:46:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opUB4-0003V2-2H
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:46:06 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opUB4-0001Bn-1c
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:46:06 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opUB4-0002ke-0i
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 12:46:06 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=1AbvJK9CHvSF3CFysXn2/VtXV1IOwax08UbucAzr/aI=; b=yO6mAVc07H9Dx+IeDhQem+PPJc
	UwR6GHfdjhqWKBoUX6y2w8KhfY3cIwH1GNJLYtSnV44prb0EGzMOJW0AsKZ9LrT3fOfFHlSH5Nbt7
	K977SMHwL+zD28TX8kxiVN20NcVrzGHfJTbjvZzJRhIsZXyIFz55NXFL5XW0L4hjV6fY=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen staging-4.15] x86/pv-shim: correct ballooning down for compat guests
Message-Id: <E1opUB4-0002ke-0i@xenbits.xenproject.org>
Date: Mon, 31 Oct 2022 12:46:06 +0000

commit ddab5b1e001366258c0bfc7d5995b9d548e6042b
Author:     Igor Druzhinin <igor.druzhinin@citrix.com>
AuthorDate: Mon Oct 31 13:38:05 2022 +0100
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 31 13:38:05 2022 +0100

    x86/pv-shim: correct ballooning down for compat guests
    
    The compat layer for multi-extent memory ops may need to split incoming
    requests. Since the guest handles in the interface structures may not be
    altered, it does so by leveraging do_memory_op()'s continuation
    handling: It hands on non-initial requests with a non-zero start extent,
    with the (native) handle suitably adjusted down. As a result
    do_memory_op() sees only the first of potentially several requests with
start extent being zero. In order to be usable as the overall result,
    the function accumulates args.nr_done, i.e. it initializes the field
    with the start extent. Therefore non-initial requests resulting from the
    split would pass too large a number into pv_shim_offline_memory().
    
    Address that breakage by always calling pv_shim_offline_memory()
    regardless of current hypercall preemption status, with a suitably
    adjusted first argument. Note that this is correct also for the native
    guest case: We now simply "commit" what was completed right away, rather
    than at the end of a series of preemption/re-start cycles. In fact this
    improves overall preemption behavior: There's no longer a potentially
    big chunk of work done non-preemptively at the end of the last
    "iteration".
    
    Fixes: b2245acc60c3 ("xen/pvshim: memory hotplug")
    Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 1d7fbc535d1d37bdc2cc53ede360b0f6651f7de1
    master date: 2022-10-28 15:49:33 +0200
---
 xen/common/memory.c | 19 +++++++------------
 1 file changed, 7 insertions(+), 12 deletions(-)

diff --git a/xen/common/memory.c b/xen/common/memory.c
index 95b2b934e4..a958d94ac3 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1407,22 +1407,17 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 
         rc = args.nr_done;
 
-        if ( args.preempted )
-            return hypercall_create_continuation(
-                __HYPERVISOR_memory_op, "lh",
-                op | (rc << MEMOP_EXTENT_SHIFT), arg);
-
 #ifdef CONFIG_X86
         if ( pv_shim && op == XENMEM_decrease_reservation )
-            /*
-             * Only call pv_shim_offline_memory when the hypercall has
-             * finished. Note that nr_done is used to cope in case the
-             * hypercall has failed and only part of the extents where
-             * processed.
-             */
-            pv_shim_offline_memory(args.nr_done, args.extent_order);
+            pv_shim_offline_memory(args.nr_done - start_extent,
+                                   args.extent_order);
 #endif
 
+        if ( args.preempted )
+           return hypercall_create_continuation(
+                __HYPERVISOR_memory_op, "lh",
+                op | (rc << MEMOP_EXTENT_SHIFT), arg);
+
         break;
 
     case XENMEM_exchange:
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.15


From xen-changelog-bounces@lists.xenproject.org Mon Oct 31 22:11:07 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Oct 2022 22:11:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.433244.686159 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opczn-0004nE-Mm; Mon, 31 Oct 2022 22:11:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 433244.686159; Mon, 31 Oct 2022 22:11:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opczn-0004n5-Ie; Mon, 31 Oct 2022 22:11:03 +0000
Received: by outflank-mailman (input) for mailman id 433244;
 Mon, 31 Oct 2022 22:11:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opczm-0004mz-53
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 22:11:02 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opczm-0002qb-4H
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 22:11:02 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opczm-00006T-3F
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 22:11:02 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=F+UobGpL9By18/XPvWYsXZ6igrIV5NaX0wVxE1cRLn4=; b=USYnhTCRsnszFE7/KDrQZ/4vSK
	KmR6TtK4rCA7rvDKbDVSA8eUH7wCCLg6c8VazUpWFmi+LLUoq/rVJBNiAniWbkHtWisA/UCSQsvB6
	zfYZsNZDNc/7WNB94GZ0PN3kXuq+URf3my7PZ3FDQBnO+MivaxVcfCRq6OoX285Ukl9A=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.16] x86emul: respect NSCB
Message-Id: <E1opczm-00006T-3F@xenbits.xenproject.org>
Date: Mon, 31 Oct 2022 22:11:02 +0000

commit 5dae06578cd5dcc312175b00ed6836a85732438d
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Mon Oct 31 13:19:35 2022 +0100
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 31 13:19:35 2022 +0100

    x86emul: respect NSCB
    
    protmode_load_seg() would better adhere to that "feature" of clearing
    base (and limit) during NULL selector loads.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 87a20c98d9f0f422727fe9b4b9e22c2c43a5cd9c
    master date: 2022-10-11 14:30:41 +0200
---
 xen/arch/x86/x86_emulate/x86_emulate.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/x86_emulate/x86_emulate.c b/xen/arch/x86/x86_emulate/x86_emulate.c
index 441086ea86..847f8f3771 100644
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -1970,6 +1970,7 @@ amd_like(const struct x86_emulate_ctxt *ctxt)
 #define vcpu_has_tbm()         (ctxt->cpuid->extd.tbm)
 #define vcpu_has_clzero()      (ctxt->cpuid->extd.clzero)
 #define vcpu_has_wbnoinvd()    (ctxt->cpuid->extd.wbnoinvd)
+#define vcpu_has_nscb()        (ctxt->cpuid->extd.nscb)
 
 #define vcpu_has_bmi1()        (ctxt->cpuid->feat.bmi1)
 #define vcpu_has_hle()         (ctxt->cpuid->feat.hle)
@@ -2102,7 +2103,7 @@ protmode_load_seg(
         case x86_seg_tr:
             goto raise_exn;
         }
-        if ( !_amd_like(cp) || !ops->read_segment ||
+        if ( !_amd_like(cp) || vcpu_has_nscb() || !ops->read_segment ||
              ops->read_segment(seg, sreg, ctxt) != X86EMUL_OKAY )
             memset(sreg, 0, sizeof(*sreg));
         else
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16


From xen-changelog-bounces@lists.xenproject.org Mon Oct 31 22:11:13 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Oct 2022 22:11:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.433245.686161 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opczx-0004pG-Mu; Mon, 31 Oct 2022 22:11:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 433245.686161; Mon, 31 Oct 2022 22:11:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opczx-0004p8-KG; Mon, 31 Oct 2022 22:11:13 +0000
Received: by outflank-mailman (input) for mailman id 433245;
 Mon, 31 Oct 2022 22:11:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opczw-0004os-8L
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 22:11:12 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opczw-0002qr-7b
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 22:11:12 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opczw-00006w-6j
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 22:11:12 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=QIJlW/HSLeDs2Jfph8BHtWzxEHRxOWbc8DAjbgroel4=; b=sX2KwmFjX1JqsWMOa6nec5pddW
	X/23RpRNErP1BRuaTmPih21GdFi+1w8p7UNKFhOqqVIyAHCWoubFz5jzfy2eeG5ASiG1/K1TdAeHc
	itMjbA7dkXbUHV1/Qd8aRK7pXfc7Wzo/Ipq4ISDvncdIp31p8c2PyAC7hOXLjHQB/ZJM=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.16] VMX: correct error handling in vmx_create_vmcs()
Message-Id: <E1opczw-00006w-6j@xenbits.xenproject.org>
Date: Mon, 31 Oct 2022 22:11:12 +0000

commit 02ab5e97c41d275ccea0910b1d8bce41ed1be5bf
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Mon Oct 31 13:20:40 2022 +0100
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 31 13:20:40 2022 +0100

    VMX: correct error handling in vmx_create_vmcs()
    
    With the addition of vmx_add_msr() calls to construct_vmcs() there are
    now cases where simply freeing the VMCS isn't enough: The MSR bitmap
    page as well as one of the MSR area ones (if it's the 2nd vmx_add_msr()
    which fails) may also need freeing. Switch to using vmx_destroy_vmcs()
    instead.
    
    Fixes: 3bd36952dab6 ("x86/spec-ctrl: Introduce an option to control L1D_FLUSH for HVM HAP guests")
    Fixes: 53a570b28569 ("x86/spec-ctrl: Support IBPB-on-entry")
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>
    master commit: 448d28309f1a966bdc850aff1a637e0b79a03e43
    master date: 2022-10-12 17:57:56 +0200
---
 xen/arch/x86/hvm/vmx/vmcs.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index dd817cee4e..237b13459d 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -1831,7 +1831,7 @@ int vmx_create_vmcs(struct vcpu *v)
 
     if ( (rc = construct_vmcs(v)) != 0 )
     {
-        vmx_free_vmcs(vmx->vmcs_pa);
+        vmx_destroy_vmcs(v);
         return rc;
     }
 
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16


From xen-changelog-bounces@lists.xenproject.org Mon Oct 31 22:11:23 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Oct 2022 22:11:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.433246.686166 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opd07-0004sF-Ob; Mon, 31 Oct 2022 22:11:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 433246.686166; Mon, 31 Oct 2022 22:11:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opd07-0004s7-Lk; Mon, 31 Oct 2022 22:11:23 +0000
Received: by outflank-mailman (input) for mailman id 433246;
 Mon, 31 Oct 2022 22:11:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opd06-0004rw-BN
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 22:11:22 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opd06-0002r3-Af
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 22:11:22 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opd06-00007M-9l
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 22:11:22 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=fzwFcIvRZHFcR/6qc4/byMZJKcEbAg4kKjVsOqHvjjg=; b=HoGP8Qlo9gMVaaatiyFhyHYEzf
	tuzknwBKnojLj+5urj9pp9ZGYkWxhw5bZYL2HwXtvR2Dwn9+n+PjuvTYVhhctDaENUv02nK8vIa4a
	1Z7G/OToHrxHcc4Q9NqdDLPdckhv+H8Hl/MZcoEpPkpCH4d85knt32CelG3XkjM5LShs=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.16] argo: Remove reachable ASSERT_UNREACHABLE
Message-Id: <E1opd06-00007M-9l@xenbits.xenproject.org>
Date: Mon, 31 Oct 2022 22:11:22 +0000

commit d4a11d6a22cf73ac7441750e5e8113779348885e
Author:     Jason Andryuk <jandryuk@gmail.com>
AuthorDate: Mon Oct 31 13:21:31 2022 +0100
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 31 13:21:31 2022 +0100

    argo: Remove reachable ASSERT_UNREACHABLE
    
    I observed this ASSERT_UNREACHABLE in partner_rings_remove consistently
    trip.  It was in OpenXT with the viptables patch applied.
    
    dom10 shuts down.
    dom7 is REJECTED sending to dom10.
    dom7 shuts down and this ASSERT trips for dom10.
    
    The argo_send_info has a domid, but there is no refcount taken on
    the domain.  Therefore it's not appropriate to ASSERT that the domain
    can be looked up via domid.  Replace with a debug message.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Reviewed-by: Christopher Clark <christopher.w.clark@gmail.com>
    master commit: 197f612b77c5afe04e60df2100a855370d720ad7
    master date: 2022-10-14 14:45:41 +0100
---
 xen/common/argo.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/common/argo.c b/xen/common/argo.c
index eaea7ba888..80f3275092 100644
--- a/xen/common/argo.c
+++ b/xen/common/argo.c
@@ -1298,7 +1298,8 @@ partner_rings_remove(struct domain *src_d)
                     ASSERT_UNREACHABLE();
             }
             else
-                ASSERT_UNREACHABLE();
+                argo_dprintk("%pd has entry for stale partner d%u\n",
+                             src_d, send_info->id.domain_id);
 
             if ( dst_d )
                 rcu_unlock_domain(dst_d);
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16


From xen-changelog-bounces@lists.xenproject.org Mon Oct 31 22:11:33 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Oct 2022 22:11:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.433247.686170 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opd0H-0004vV-RK; Mon, 31 Oct 2022 22:11:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 433247.686170; Mon, 31 Oct 2022 22:11:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opd0H-0004vN-Oi; Mon, 31 Oct 2022 22:11:33 +0000
Received: by outflank-mailman (input) for mailman id 433247;
 Mon, 31 Oct 2022 22:11:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opd0G-0004vE-G8
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 22:11:32 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opd0G-0002ra-Ee
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 22:11:32 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opd0G-00007n-D4
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 22:11:32 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=IKU8GNn4v+dqzNmZP+pRAugb0JAJ38YIf0Vik/z/Cbk=; b=Q39hwMpdIuNOLOqK8gdeud2cs3
	IODSzxwCPcXWYE7FBPUytAevDxeiTILpjaXTLSJWjwVkeZdMpVoexDg79Iz0dv6Hn3BdciHwIUqK6
	NFDI4wZQwy+XzlqfJIoIr7zU34XrTBwMyvtbt8crT99wDzOEcIqd9lb/JzultxKDQNcs=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.16] EFI: don't convert memory marked for runtime use to ordinary RAM
Message-Id: <E1opd0G-00007n-D4@xenbits.xenproject.org>
Date: Mon, 31 Oct 2022 22:11:32 +0000

commit 54f8ed80c8308e65c3f57ae6cbd130f43f5ecbbd
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Mon Oct 31 13:22:17 2022 +0100
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 31 13:22:17 2022 +0100

    EFI: don't convert memory marked for runtime use to ordinary RAM
    
    efi_init_memory() in both relevant places treats EFI_MEMORY_RUNTIME as
    having higher priority than the type of the range. To avoid accessing
    memory at runtime which was re-used for other purposes, make
    efi_arch_process_memory_map() follow suit. While in theory the same would
    apply to EfiACPIReclaimMemory, we don't actually "reclaim" or clobber
    that memory (converted to E820_ACPI on x86) there (and it would be a bug
    if the Dom0 kernel tried to reclaim the range, bypassing Xen's memory
    management, plus it would be at least bogus if it clobbered that space),
    hence that type's handling can be left alone.
    
    Fixes: bf6501a62e80 ("x86-64: EFI boot code")
    Fixes: facac0af87ef ("x86-64: EFI runtime code")
    Fixes: 6d70ea10d49f ("Add ARM EFI boot support")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    master commit: f324300c8347b6aa6f9c0b18e0a90bbf44011a9a
    master date: 2022-10-21 12:30:24 +0200
---
 xen/arch/arm/efi/efi-boot.h | 3 ++-
 xen/arch/x86/efi/efi-boot.h | 4 +++-
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/efi/efi-boot.h b/xen/arch/arm/efi/efi-boot.h
index 9f26798239..849071fe53 100644
--- a/xen/arch/arm/efi/efi-boot.h
+++ b/xen/arch/arm/efi/efi-boot.h
@@ -194,7 +194,8 @@ static EFI_STATUS __init efi_process_memory_map_bootinfo(EFI_MEMORY_DESCRIPTOR *
 
     for ( Index = 0; Index < (mmap_size / desc_size); Index++ )
     {
-        if ( desc_ptr->Attribute & EFI_MEMORY_WB &&
+        if ( !(desc_ptr->Attribute & EFI_MEMORY_RUNTIME) &&
+             (desc_ptr->Attribute & EFI_MEMORY_WB) &&
              (desc_ptr->Type == EfiConventionalMemory ||
               desc_ptr->Type == EfiLoaderCode ||
               desc_ptr->Type == EfiLoaderData ||
diff --git a/xen/arch/x86/efi/efi-boot.h b/xen/arch/x86/efi/efi-boot.h
index 4ee77fb9bf..d996016223 100644
--- a/xen/arch/x86/efi/efi-boot.h
+++ b/xen/arch/x86/efi/efi-boot.h
@@ -185,7 +185,9 @@ static void __init efi_arch_process_memory_map(EFI_SYSTEM_TABLE *SystemTable,
             /* fall through */
         case EfiLoaderCode:
         case EfiLoaderData:
-            if ( desc->Attribute & EFI_MEMORY_WB )
+            if ( desc->Attribute & EFI_MEMORY_RUNTIME )
+                type = E820_RESERVED;
+            else if ( desc->Attribute & EFI_MEMORY_WB )
                 type = E820_RAM;
             else
         case EfiUnusableMemory:
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16
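The attribute-priority rule this patch establishes can be illustrated with a small standalone sketch (the constants and the classify_range() helper below are hypothetical stand-ins, not the actual Xen code or the real UEFI bit values): a range carrying the runtime attribute stays reserved even when it would otherwise qualify as ordinary RAM.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins for the EFI attribute bits and E820 types. */
#define EFI_MEMORY_WB      (1u << 0)
#define EFI_MEMORY_RUNTIME (1u << 1)
enum e820_type { E820_RAM = 1, E820_RESERVED = 2 };

/*
 * Mirrors the corrected priority: EFI_MEMORY_RUNTIME wins over the
 * cacheability check, so memory marked for runtime-service use is
 * never handed out as ordinary RAM.
 */
static enum e820_type classify_range(uint32_t attribute)
{
    if ( attribute & EFI_MEMORY_RUNTIME )
        return E820_RESERVED;
    return (attribute & EFI_MEMORY_WB) ? E820_RAM : E820_RESERVED;
}
```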


From xen-changelog-bounces@lists.xenproject.org Mon Oct 31 22:11:44 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Oct 2022 22:11:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.433248.686176 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opd0R-0004yL-Tw; Mon, 31 Oct 2022 22:11:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 433248.686176; Mon, 31 Oct 2022 22:11:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opd0R-0004yD-Q8; Mon, 31 Oct 2022 22:11:43 +0000
Received: by outflank-mailman (input) for mailman id 433248;
 Mon, 31 Oct 2022 22:11:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opd0Q-0004y3-Ly
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 22:11:42 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opd0Q-0002tI-Ib
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 22:11:42 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opd0Q-00008D-Gv
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 22:11:42 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=FxZUp1wmx5/zFH2DL76SeErkkvaQ6Y6dhwDLzbGcyqU=; b=pzJ2s2ZGbjOQYTJb3yGEyMGs34
	GbSqDidUA0qRu2EmKnqla3ymXpLD+yJ4Mqi9TCB3kMdP3Ks2QHHZioh/5O5cZZdwsoPQz8PR0Jyqy
	kT2aJ/WNAJAVEKdkE0lQQzyhvMH1bl93dSXKBMHEgID4n7chWckKas3q6dkP5uCjLTDU=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.16] xen/sched: fix race in RTDS scheduler
Message-Id: <E1opd0Q-00008D-Gv@xenbits.xenproject.org>
Date: Mon, 31 Oct 2022 22:11:42 +0000

commit 481465f35da1bcec0b2a4dfd6fc51d86cac28547
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Mon Oct 31 13:22:54 2022 +0100
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 31 13:22:54 2022 +0100

    xen/sched: fix race in RTDS scheduler
    
    When a domain gets paused, the unit's runnable state can change to "not
    runnable" without the scheduling lock being involved. This means that
    the specific scheduler isn't involved in this change of runnable state.
    
    In the RTDS scheduler this can result in an inconsistency in case a
    unit loses its "runnable" capability while the RTDS scheduler's
    scheduling function is active. RTDS will remove the unit from the run
    queue, but doesn't do so for the replenishment queue, leading to hitting
    an ASSERT() in replq_insert() later when the domain is unpaused again.
    
    Fix that by removing the unit from the replenish queue as well in this
    case.
    
    Fixes: 7c7b407e7772 ("xen/sched: introduce unit_runnable_state()")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Dario Faggioli <dfaggioli@suse.com>
    master commit: 73c62927f64ecb48f27d06176befdf76b879f340
    master date: 2022-10-21 12:32:23 +0200
---
 xen/common/sched/rt.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/common/sched/rt.c b/xen/common/sched/rt.c
index c24cd2ac32..ec2ca1bebc 100644
--- a/xen/common/sched/rt.c
+++ b/xen/common/sched/rt.c
@@ -1087,6 +1087,7 @@ rt_schedule(const struct scheduler *ops, struct sched_unit *currunit,
         else if ( !unit_runnable_state(snext->unit) )
         {
             q_remove(snext);
+            replq_remove(ops, snext);
             snext = rt_unit(sched_idle_unit(sched_cpu));
         }
 
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16
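The invariant the fix restores can be modelled with a toy sketch (the names below are illustrative, not Xen's internal API): a unit taken off the run queue on losing runnability must also leave the replenishment queue, otherwise a later re-insert trips over the stale entry.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model: a unit may sit on two independent queues; the bug was
 * removing it from only one of them. */
struct toy_unit { bool on_runq; bool on_replq; };

static void dequeue_not_runnable(struct toy_unit *u)
{
    u->on_runq = false;   /* corresponds to q_remove() */
    u->on_replq = false;  /* corresponds to the replq_remove() added by the fix */
}

/* Invariant checked when the domain is unpaused: a unit being
 * re-inserted must not already be on the replenishment queue. */
static bool replq_insert_precondition(const struct toy_unit *u)
{
    return !u->on_replq;
}
```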


From xen-changelog-bounces@lists.xenproject.org Mon Oct 31 22:11:54 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Oct 2022 22:11:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.433249.686179 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opd0b-00050y-Vn; Mon, 31 Oct 2022 22:11:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 433249.686179; Mon, 31 Oct 2022 22:11:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opd0b-00050q-Rp; Mon, 31 Oct 2022 22:11:53 +0000
Received: by outflank-mailman (input) for mailman id 433249;
 Mon, 31 Oct 2022 22:11:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opd0a-00050f-ML
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 22:11:52 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opd0a-0002tW-Lf
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 22:11:52 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opd0a-00008g-Ky
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 22:11:52 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=4xRB/ViX5D7E17wcrRrXRZgVIfNfCTdnqJWj2uVZkvM=; b=6zIuNTUnTok/XyxDmerKbUdhsn
	/m5z9FSQJn6riBo7Vwl2ZlO1Xs3mggO487zPMQ5oKGIZ+PcL+OyfLlqUXMc/fwCF3FgJSFjvu/9bV
	9gvl+c/ADUS1OjVv27Ar2lvrslljxCjKnws+8buIOgeGLR9VNRl/jbnszy0CtITfsy2E=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.16] xen/sched: fix restore_vcpu_affinity() by removing it
Message-Id: <E1opd0a-00008g-Ky@xenbits.xenproject.org>
Date: Mon, 31 Oct 2022 22:11:52 +0000

commit 88f2bf5de9ad789e1c61b5d5ecf118909eed6917
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Mon Oct 31 13:23:50 2022 +0100
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 31 13:23:50 2022 +0100

    xen/sched: fix restore_vcpu_affinity() by removing it
    
    When the system is coming up after having been suspended,
    restore_vcpu_affinity() is called for each domain in order to adjust
    the vcpus' affinity settings in case a cpu didn't come back to life again.
    
    The way restore_vcpu_affinity() is doing that is wrong, because the
    specific scheduler isn't being informed about a possible migration of
    the vcpu to another cpu. Additionally the migration often happens even
    when all cpus are running again, as it is done without checking whether
    it is really needed.
    
    As cpupool management is already calling cpu_disable_scheduler() for
    cpus not having come up again, and cpu_disable_scheduler() is taking
    care of any needed vcpu migration in the proper way, there is
    simply no need for restore_vcpu_affinity().
    
    So just remove restore_vcpu_affinity() completely, together with the
    no longer used sched_reset_affinity_broken().
    
    Fixes: 8a04eaa8ea83 ("xen/sched: move some per-vcpu items to struct sched_unit")
    Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Dario Faggioli <dfaggioli@suse.com>
    Tested-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    master commit: fce1f381f7388daaa3e96dbb0d67d7a3e4bb2d2d
    master date: 2022-10-24 11:16:27 +0100
---
 xen/arch/x86/acpi/power.c |  3 --
 xen/common/sched/core.c   | 78 -----------------------------------------------
 xen/include/xen/sched.h   |  1 -
 3 files changed, 82 deletions(-)

diff --git a/xen/arch/x86/acpi/power.c b/xen/arch/x86/acpi/power.c
index dd397f7130..1a7baeebe6 100644
--- a/xen/arch/x86/acpi/power.c
+++ b/xen/arch/x86/acpi/power.c
@@ -159,10 +159,7 @@ static void thaw_domains(void)
 
     rcu_read_lock(&domlist_read_lock);
     for_each_domain ( d )
-    {
-        restore_vcpu_affinity(d);
         domain_unpause(d);
-    }
     rcu_read_unlock(&domlist_read_lock);
 }
 
diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 900aab8f66..9173cf690c 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -1188,84 +1188,6 @@ static bool sched_check_affinity_broken(const struct sched_unit *unit)
     return false;
 }
 
-static void sched_reset_affinity_broken(const struct sched_unit *unit)
-{
-    struct vcpu *v;
-
-    for_each_sched_unit_vcpu ( unit, v )
-        v->affinity_broken = false;
-}
-
-void restore_vcpu_affinity(struct domain *d)
-{
-    unsigned int cpu = smp_processor_id();
-    struct sched_unit *unit;
-
-    ASSERT(system_state == SYS_STATE_resume);
-
-    rcu_read_lock(&sched_res_rculock);
-
-    for_each_sched_unit ( d, unit )
-    {
-        spinlock_t *lock;
-        unsigned int old_cpu = sched_unit_master(unit);
-        struct sched_resource *res;
-
-        ASSERT(!unit_runnable(unit));
-
-        /*
-         * Re-assign the initial processor as after resume we have no
-         * guarantee the old processor has come back to life again.
-         *
-         * Therefore, here, before actually unpausing the domains, we should
-         * set v->processor of each of their vCPUs to something that will
-         * make sense for the scheduler of the cpupool in which they are in.
-         */
-        lock = unit_schedule_lock_irq(unit);
-
-        cpumask_and(cpumask_scratch_cpu(cpu), unit->cpu_hard_affinity,
-                    cpupool_domain_master_cpumask(d));
-        if ( cpumask_empty(cpumask_scratch_cpu(cpu)) )
-        {
-            if ( sched_check_affinity_broken(unit) )
-            {
-                sched_set_affinity(unit, unit->cpu_hard_affinity_saved, NULL);
-                sched_reset_affinity_broken(unit);
-                cpumask_and(cpumask_scratch_cpu(cpu), unit->cpu_hard_affinity,
-                            cpupool_domain_master_cpumask(d));
-            }
-
-            if ( cpumask_empty(cpumask_scratch_cpu(cpu)) )
-            {
-                /* Affinity settings of one vcpu are for the complete unit. */
-                printk(XENLOG_DEBUG "Breaking affinity for %pv\n",
-                       unit->vcpu_list);
-                sched_set_affinity(unit, &cpumask_all, NULL);
-                cpumask_and(cpumask_scratch_cpu(cpu), unit->cpu_hard_affinity,
-                            cpupool_domain_master_cpumask(d));
-            }
-        }
-
-        res = get_sched_res(cpumask_any(cpumask_scratch_cpu(cpu)));
-        sched_set_res(unit, res);
-
-        spin_unlock_irq(lock);
-
-        /* v->processor might have changed, so reacquire the lock. */
-        lock = unit_schedule_lock_irq(unit);
-        res = sched_pick_resource(unit_scheduler(unit), unit);
-        sched_set_res(unit, res);
-        spin_unlock_irq(lock);
-
-        if ( old_cpu != sched_unit_master(unit) )
-            sched_move_irqs(unit);
-    }
-
-    rcu_read_unlock(&sched_res_rculock);
-
-    domain_update_node_affinity(d);
-}
-
 /*
  * This function is used by cpu_hotplug code via cpu notifier chain
  * and from cpupools to switch schedulers on a cpu.
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 3f4225738a..1a1fab5239 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -999,7 +999,6 @@ void vcpu_set_periodic_timer(struct vcpu *v, s_time_t value);
 void sched_setup_dom0_vcpus(struct domain *d);
 int vcpu_temporary_affinity(struct vcpu *v, unsigned int cpu, uint8_t reason);
 int vcpu_set_hard_affinity(struct vcpu *v, const cpumask_t *affinity);
-void restore_vcpu_affinity(struct domain *d);
 int vcpu_affinity_domctl(struct domain *d, uint32_t cmd,
                          struct xen_domctl_vcpuaffinity *vcpuaff);
 
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16


From xen-changelog-bounces@lists.xenproject.org Mon Oct 31 22:12:04 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Oct 2022 22:12:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.433250.686182 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opd0m-00055K-0I; Mon, 31 Oct 2022 22:12:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 433250.686182; Mon, 31 Oct 2022 22:12:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opd0l-000557-TX; Mon, 31 Oct 2022 22:12:03 +0000
Received: by outflank-mailman (input) for mailman id 433250;
 Mon, 31 Oct 2022 22:12:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opd0k-00054v-Pm
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 22:12:02 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opd0k-0002tt-Ow
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 22:12:02 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opd0k-00009W-O6
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 22:12:02 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=ocfaON5/IXWBU/ml8QGt+OA1tHn0fxP6QM4ySd3181Y=; b=2SVEiHwqF64e+GEGxzJxpLaySr
	JPcVCJShZAVCLDfAq9qz1ShJWcpUmGSlh8Oop4zLrvk4ThMJdgrv2DRgLXvbpxblrdcseZfXpki/k
	2kHvYpQUW/LQj7N2AB3kdUVlMpc2FZx5T0xwL6D3legW08iVKRIS00eny5FOV5wybaik=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.16] x86/shadow: drop (replace) bogus assertions
Message-Id: <E1opd0k-00009W-O6@xenbits.xenproject.org>
Date: Mon, 31 Oct 2022 22:12:02 +0000

commit 9fdb4f17656f74b35af0882b558e44832ff00b5f
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Mon Oct 31 13:24:33 2022 +0100
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 31 13:24:33 2022 +0100

    x86/shadow: drop (replace) bogus assertions
    
    The addition of a call to shadow_blow_tables() from shadow_teardown()
    has resulted in the "no vcpus" related assertion becoming triggerable:
    If domain_create() fails with at least one page successfully allocated
    in the course of shadow_enable(), or if domain_create() succeeds and
    the domain is then killed without ever invoking XEN_DOMCTL_max_vcpus.
    Note that in-tree tests (test-resource and test-tsx) do exactly the
    latter of these two.
    
    The assertion's comment was bogus anyway: Shadow mode has been getting
    enabled before allocation of vCPU-s for quite some time. Convert the
    assertion to a conditional: As long as there are no vCPU-s, there's
    nothing to blow away.
    
    Fixes: e7aa55c0aab3 ("x86/p2m: free the paging memory pool preemptively")
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    
    A similar assertion/comment pair exists in _shadow_prealloc(); the
    comment is similarly bogus, and the assertion could in principle trigger
    e.g. when shadow_alloc_p2m_page() is called early enough. Replace those
    at the same time by a similar early return, here indicating failure to
    the caller (which will generally lead to the domain being crashed in
    shadow_prealloc()).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: a92dc2bb30ba65ae25d2f417677eb7ef9a6a0fef
    master date: 2022-10-24 15:46:11 +0200
---
 xen/arch/x86/mm/shadow/common.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 3b0d781991..1de0139742 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -943,8 +943,9 @@ static bool __must_check _shadow_prealloc(struct domain *d, unsigned int pages)
         /* No reclaim when the domain is dying, teardown will take care of it. */
         return false;
 
-    /* Shouldn't have enabled shadows if we've no vcpus. */
-    ASSERT(d->vcpu && d->vcpu[0]);
+    /* Nothing to reclaim when there are no vcpus yet. */
+    if ( !d->vcpu[0] )
+        return false;
 
     /* Stage one: walk the list of pinned pages, unpinning them */
     perfc_incr(shadow_prealloc_1);
@@ -1034,8 +1035,9 @@ void shadow_blow_tables(struct domain *d)
     mfn_t smfn;
     int i;
 
-    /* Shouldn't have enabled shadows if we've no vcpus. */
-    ASSERT(d->vcpu && d->vcpu[0]);
+    /* Nothing to do when there are no vcpus yet. */
+    if ( !d->vcpu[0] )
+        return;
 
     /* Pass one: unpin all pinned pages */
     foreach_pinned_shadow(d, sp, t)
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16
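The conversion applied here follows a general pattern: when a precondition can legitimately be false (here, no vCPUs allocated yet), an ASSERT() is replaced by a graceful early exit. A minimal sketch, with hypothetical names standing in for the shadow-code specifics:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for the relevant part of struct domain. */
struct toy_domain { void *vcpu0; };

/*
 * Previously the precondition was asserted; since a domain can reach
 * this path before any vCPU exists, bail out gracefully instead,
 * reporting failure (0) to the caller.
 */
static int toy_prealloc(const struct toy_domain *d)
{
    if ( d->vcpu0 == NULL )
        return 0; /* nothing to reclaim yet */
    return 1;     /* vCPUs exist, reclaim can proceed */
}
```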


From xen-changelog-bounces@lists.xenproject.org Mon Oct 31 22:12:15 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Oct 2022 22:12:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.433251.686187 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opd0x-00058O-2I; Mon, 31 Oct 2022 22:12:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 433251.686187; Mon, 31 Oct 2022 22:12:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opd0w-00058G-V9; Mon, 31 Oct 2022 22:12:14 +0000
Received: by outflank-mailman (input) for mailman id 433251;
 Mon, 31 Oct 2022 22:12:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opd0u-000583-SU
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 22:12:12 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opd0u-0002u3-Rt
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 22:12:12 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opd0u-0000A7-R7
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 22:12:12 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=AlVytY/+YDSUrDnvggEu4814MSufMHQz41xfS4Ra+OQ=; b=CvVGXm6z9MqqiWwGZSS/kN11eo
	uCH/EMrJX7lIaEHFEspIk/Ruf3stUHdVcxHClSaTThWa5f4ieMyWEJNs8mzygMDtDGPWg/jApfVow
	0PY90TEzxOmijhbreAchTaw3FZydGnpcfXSQAnHtGedTa5Y88xS1Rkzq5WCiClhTuCf8=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.16] vpci: don't assume that vpci per-device data exists unconditionally
Message-Id: <E1opd0u-0000A7-R7@xenbits.xenproject.org>
Date: Mon, 31 Oct 2022 22:12:12 +0000

commit 96d26f11f56e83b98ec184f4e0d17161efe3a927
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Mon Oct 31 13:25:13 2022 +0100
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 31 13:25:13 2022 +0100

    vpci: don't assume that vpci per-device data exists unconditionally
    
    It's possible for a device to be assigned to a domain but have no
    vpci structure if vpci_process_pending() failed and called
    vpci_remove_device() as a result.  The unconditional accesses done by
    vpci_{read,write}() and vpci_remove_device() to pdev->vpci would
    then trigger a NULL pointer dereference.
    
    Add checks for pdev->vpci presence in the affected functions.
    
    Fixes: 9c244fdef7 ('vpci: add header handlers')
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: 6ccb5e308ceeb895fbccd87a528a8bd24325aa39
    master date: 2022-10-26 14:55:30 +0200
---
 xen/drivers/vpci/vpci.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
index dfc8136ffb..53d78d5391 100644
--- a/xen/drivers/vpci/vpci.c
+++ b/xen/drivers/vpci/vpci.c
@@ -37,7 +37,7 @@ extern vpci_register_init_t *const __end_vpci_array[];
 
 void vpci_remove_device(struct pci_dev *pdev)
 {
-    if ( !has_vpci(pdev->domain) )
+    if ( !has_vpci(pdev->domain) || !pdev->vpci )
         return;
 
     spin_lock(&pdev->vpci->lock);
@@ -326,7 +326,7 @@ uint32_t vpci_read(pci_sbdf_t sbdf, unsigned int reg, unsigned int size)
 
     /* Find the PCI dev matching the address. */
     pdev = pci_get_pdev_by_domain(d, sbdf.seg, sbdf.bus, sbdf.devfn);
-    if ( !pdev )
+    if ( !pdev || !pdev->vpci )
         return vpci_read_hw(sbdf, reg, size);
 
     spin_lock(&pdev->vpci->lock);
@@ -436,7 +436,7 @@ void vpci_write(pci_sbdf_t sbdf, unsigned int reg, unsigned int size,
      * Passthrough everything that's not trapped.
      */
     pdev = pci_get_pdev_by_domain(d, sbdf.seg, sbdf.bus, sbdf.devfn);
-    if ( !pdev )
+    if ( !pdev || !pdev->vpci )
     {
         vpci_write_hw(sbdf, reg, size, data);
         return;
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16


From xen-changelog-bounces@lists.xenproject.org Mon Oct 31 22:12:25 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Oct 2022 22:12:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.433253.686189 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opd17-0005BZ-3E; Mon, 31 Oct 2022 22:12:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 433253.686189; Mon, 31 Oct 2022 22:12:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opd17-0005BR-0R; Mon, 31 Oct 2022 22:12:25 +0000
Received: by outflank-mailman (input) for mailman id 433253;
 Mon, 31 Oct 2022 22:12:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opd15-0005Az-27
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 22:12:23 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opd15-0002uA-1S
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 22:12:23 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opd14-0000Aa-UD
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 22:12:22 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=3WTumRSwsW7p8XqHi01E3vbA2pLkb4yuWkIKs1IwWyY=; b=YRdL3LYwzl+h7HC+5VfaqY4brO
	YQ1ZGe8sPnrLvRJ6sl6MtH+uyDRr4oIskjPt8qt+8SJdPEuyM2R12NTGmtqAweFFHB17rv+avPI9B
	nkPK7R9aO7H3GvGWwZ5zMsxVJTap1CxxytW8andjz6NIatvNrtUD/h0d9YXglGeI+sbk=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.16] vpci/msix: remove from table list on detach
Message-Id: <E1opd14-0000Aa-UD@xenbits.xenproject.org>
Date: Mon, 31 Oct 2022 22:12:22 +0000

commit 8f3f8f20de5cea704671d4ca83f2dceb93ab98d8
Author:     Roger Pau Monné <roger.pau@citrix.com>
AuthorDate: Mon Oct 31 13:25:40 2022 +0100
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 31 13:25:40 2022 +0100

    vpci/msix: remove from table list on detach
    
    Teardown of MSIX vPCI related data doesn't currently remove the MSIX
    device data from the list of MSIX tables handled by the domain,
    leading to a use-after-free of the data in the msix structure.
    
    Remove the structure from the list before freeing it in order to fix
    this.
    
    Reported-by: Jan Beulich <jbeulich@suse.com>
    Fixes: d6281be9d0 ('vpci/msix: add MSI-X handlers')
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: c14aea137eab29eb9c30bfad745a00c65ad21066
    master date: 2022-10-26 14:56:58 +0200
---
 xen/drivers/vpci/vpci.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
index 53d78d5391..b9339f8f3e 100644
--- a/xen/drivers/vpci/vpci.c
+++ b/xen/drivers/vpci/vpci.c
@@ -51,8 +51,12 @@ void vpci_remove_device(struct pci_dev *pdev)
         xfree(r);
     }
     spin_unlock(&pdev->vpci->lock);
-    if ( pdev->vpci->msix && pdev->vpci->msix->pba )
-        iounmap(pdev->vpci->msix->pba);
+    if ( pdev->vpci->msix )
+    {
+        list_del(&pdev->vpci->msix->next);
+        if ( pdev->vpci->msix->pba )
+            iounmap(pdev->vpci->msix->pba);
+    }
     xfree(pdev->vpci->msix);
     xfree(pdev->vpci->msi);
     xfree(pdev->vpci);
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16


From xen-changelog-bounces@lists.xenproject.org Mon Oct 31 22:12:35 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Oct 2022 22:12:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.433255.686193 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opd1H-0005EF-4t; Mon, 31 Oct 2022 22:12:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 433255.686193; Mon, 31 Oct 2022 22:12:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opd1H-0005E8-20; Mon, 31 Oct 2022 22:12:35 +0000
Received: by outflank-mailman (input) for mailman id 433255;
 Mon, 31 Oct 2022 22:12:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opd1F-0005Dp-57
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 22:12:33 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opd1F-0002ug-4O
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 22:12:33 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opd1F-0000B9-3i
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 22:12:33 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=n7znlOlLP7Xo2bZY6UUpnbcYl2/OxFvLZkOoBeGWW/k=; b=oAggUuqD3CDc2vd474Ubf3+r6P
	aorVpY0qfHj2TKWqJQAr+Vj363vrPFlvbW1ZbxnSBIZmc4uoOts/O7CUZSQLeDPheOkM4zAUL0dfw
	z7NUvxMWCojcsHhTgb0vhXxVpH5QYkLJfoFJims0S1NjDWyFpbtkF3Q0kSnVeMjw4+fQ=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.16] x86: also zap secondary time area handles during soft reset
Message-Id: <E1opd1F-0000B9-3i@xenbits.xenproject.org>
Date: Mon, 31 Oct 2022 22:12:33 +0000

commit aac108509055e5f5ff293e1fb44614f96a0996c6
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Mon Oct 31 13:26:08 2022 +0100
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 31 13:26:08 2022 +0100

    x86: also zap secondary time area handles during soft reset
    
    Just like domain_soft_reset() properly zaps runstate area handles, the
    secondary time area ones also need discarding to prevent guest memory
    corruption once the guest is re-started.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    master commit: b80d4f8d2ea6418e32fb4f20d1304ace6d6566e3
    master date: 2022-10-27 11:49:09 +0200
---
 xen/arch/x86/domain.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index a4356893bd..3fab2364be 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -929,6 +929,7 @@ int arch_domain_soft_reset(struct domain *d)
     struct page_info *page = virt_to_page(d->shared_info), *new_page;
     int ret = 0;
     struct domain *owner;
+    struct vcpu *v;
     mfn_t mfn;
     gfn_t gfn;
     p2m_type_t p2mt;
@@ -1008,7 +1009,12 @@ int arch_domain_soft_reset(struct domain *d)
                "Failed to add a page to replace %pd's shared_info frame %"PRI_gfn"\n",
                d, gfn_x(gfn));
         free_domheap_page(new_page);
+        goto exit_put_gfn;
     }
+
+    for_each_vcpu ( d, v )
+        set_xen_guest_handle(v->arch.time_info_guest, NULL);
+
  exit_put_gfn:
     put_gfn(d, gfn_x(gfn));
  exit_put_page:
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16


From xen-changelog-bounces@lists.xenproject.org Mon Oct 31 22:12:44 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Oct 2022 22:12:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.433256.686198 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opd1Q-0005Hi-8Z; Mon, 31 Oct 2022 22:12:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 433256.686198; Mon, 31 Oct 2022 22:12:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opd1Q-0005Hb-5f; Mon, 31 Oct 2022 22:12:44 +0000
Received: by outflank-mailman (input) for mailman id 433256;
 Mon, 31 Oct 2022 22:12:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opd1P-0005HR-8d
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 22:12:43 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opd1P-0002uk-83
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 22:12:43 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opd1P-0000BY-6d
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 22:12:43 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=Ei2fBxLAPQtXNrA6vds4J+e6NuMHs2imiVgRwgm3TmY=; b=i2oEsLjlDwhnOYKmfbnUtQgaVF
	rHab22Lkd0U5qmhlv5Kn5rkioVcKw0VVUSXMhvjg7uunUzUjitms7gMwrsvzuz/xocHRxA+sL59QP
	xKXjFgds850pfrOmSOdi2bAzwGSJVeRwqs0zLJ+yHGZANPPtMR9+11rb02L0PghBI/GI=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.16] common: map_vcpu_info() wants to unshare the underlying page
Message-Id: <E1opd1P-0000BY-6d@xenbits.xenproject.org>
Date: Mon, 31 Oct 2022 22:12:43 +0000

commit 426a8346c01075ec5eba4aadefab03a96b6ece6a
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: Mon Oct 31 13:26:33 2022 +0100
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 31 13:26:33 2022 +0100

    common: map_vcpu_info() wants to unshare the underlying page
    
    Not passing P2M_UNSHARE to get_page_from_gfn() means there won't even be
    an attempt to unshare the referenced page, without any indication to the
    caller (e.g. -EAGAIN). Note that guests have no direct control over
    which of their pages are shared (or paged out), and hence they have no
    way to make sure all on their own that the subsequent obtaining of a
    writable type reference can actually succeed.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
    master commit: 48980cf24d5cf41fd644600f99c753419505e735
    master date: 2022-10-28 11:38:32 +0200
---
 xen/common/domain.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index 56d47dd664..e3afcacb6c 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -1471,7 +1471,7 @@ int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset)
     if ( (v != current) && !(v->pause_flags & VPF_down) )
         return -EINVAL;
 
-    page = get_page_from_gfn(d, gfn, NULL, P2M_ALLOC);
+    page = get_page_from_gfn(d, gfn, NULL, P2M_UNSHARE);
     if ( !page )
         return -EINVAL;
 
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16


From xen-changelog-bounces@lists.xenproject.org Mon Oct 31 22:12:54 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Oct 2022 22:12:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.433257.686202 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opd1a-0005KV-AJ; Mon, 31 Oct 2022 22:12:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 433257.686202; Mon, 31 Oct 2022 22:12:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opd1a-0005KN-7S; Mon, 31 Oct 2022 22:12:54 +0000
Received: by outflank-mailman (input) for mailman id 433257;
 Mon, 31 Oct 2022 22:12:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opd1Z-0005KH-BW
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 22:12:53 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opd1Z-0002uu-Ar
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 22:12:53 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opd1Z-0000Bz-AD
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 22:12:53 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=48N4buYwOZrwCnGzT8pxBTJfAHHQPReP2rBzTP1Y82I=; b=3jNFrkGLm58uYM6UqPz2lNs7Tj
	AqB898Kcmd4Uqt9qp8HrWdIVYxRU8fu6SVvyd2uTmdCOr7MtbpTBVSle4/N+cesB4tZXt95bas4oT
	9jg2r98KPwdk2JkCn7qFAGzIyLx/2WUIGYv+IeA1ySpo4M78qIXgebRKJfpkLkCHB7NQ=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.16] x86/pv-shim: correctly ignore empty onlining requests
Message-Id: <E1opd1Z-0000Bz-AD@xenbits.xenproject.org>
Date: Mon, 31 Oct 2022 22:12:53 +0000

commit 08f6c88405a4406cac5b90e8d9873258dc445006
Author:     Igor Druzhinin <igor.druzhinin@citrix.com>
AuthorDate: Mon Oct 31 13:26:59 2022 +0100
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 31 13:26:59 2022 +0100

    x86/pv-shim: correctly ignore empty onlining requests
    
    Mem-op requests may have zero extents. Such requests need treating as
    no-ops. pv_shim_online_memory(), however, would have tried to take 2³²-1
    order-sized pages from its balloon list (to then populate them),
    typically ending when the entire set of ballooned pages of this order
    was consumed.
    
    Note that pv_shim_offline_memory() does not have such an issue.
    
    Fixes: b2245acc60c3 ("xen/pvshim: memory hotplug")
    Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 9272225ca72801fd9fa5b268a2d1c5adebd19cd9
    master date: 2022-10-28 15:47:59 +0200
---
 xen/arch/x86/pv/shim.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/xen/arch/x86/pv/shim.c b/xen/arch/x86/pv/shim.c
index d9704121a7..4146ee3f9c 100644
--- a/xen/arch/x86/pv/shim.c
+++ b/xen/arch/x86/pv/shim.c
@@ -944,6 +944,9 @@ void pv_shim_online_memory(unsigned int nr, unsigned int order)
     struct page_info *page, *tmp;
     PAGE_LIST_HEAD(list);
 
+    if ( !nr )
+        return;
+
     spin_lock(&balloon_lock);
     page_list_for_each_safe ( page, tmp, &balloon )
     {
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16


From xen-changelog-bounces@lists.xenproject.org Mon Oct 31 22:13:04 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Oct 2022 22:13:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.433258.686206 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opd1k-0005NF-CE; Mon, 31 Oct 2022 22:13:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 433258.686206; Mon, 31 Oct 2022 22:13:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opd1k-0005N7-99; Mon, 31 Oct 2022 22:13:04 +0000
Received: by outflank-mailman (input) for mailman id 433258;
 Mon, 31 Oct 2022 22:13:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opd1j-0005Ms-EU
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 22:13:03 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opd1j-0002vH-Ds
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 22:13:03 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opd1j-0000Cd-DA
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 22:13:03 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=W33kYyQZJqaw40d8w7IGoq7kJfuQ633mQhkNFvABqu4=; b=zTmajJcz9NnxDjZXLHYZ3fLH7I
	2xTwzGcpMV9UIwWCtiVg97ET6Tqv8ZtDA2RcaeO2i/R1KbfjkQbeiWzbTpxc4rQ+NBvj2iIaUve1k
	gfl0XP2cVtpdLrtEgkXyg1tVmvVlBR7j9agzw0qnvAlJmxxQ9GjD0ZrDKaMfcvNrvjEA=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.16] x86/pv-shim: correct ballooning up for compat guests
Message-Id: <E1opd1j-0000Cd-DA@xenbits.xenproject.org>
Date: Mon, 31 Oct 2022 22:13:03 +0000

commit 2f75e3654f00a62bd1f446a7424ccd56750a2e15
Author:     Igor Druzhinin <igor.druzhinin@citrix.com>
AuthorDate: Mon Oct 31 13:28:15 2022 +0100
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 31 13:28:15 2022 +0100

    x86/pv-shim: correct ballooning up for compat guests
    
    The compat layer for multi-extent memory ops may need to split incoming
    requests. Since the guest handles in the interface structures may not be
    altered, it does so by leveraging do_memory_op()'s continuation
    handling: It hands on non-initial requests with a non-zero start extent,
    with the (native) handle suitably adjusted down. As a result
    do_memory_op() sees only the first of potentially several requests with
    start extent being zero. It is only in that case that the function would
    issue a call to pv_shim_online_memory(), yet the range then covers only
    the first sub-range that results from the split.
    
    Address that breakage by making a complementary call to
    pv_shim_online_memory() in the compat layer.
    
    Fixes: b2245acc60c3 ("xen/pvshim: memory hotplug")
    Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: a0bfdd201ea12aa5679bb8944d63a4e0d3c23160
    master date: 2022-10-28 15:48:50 +0200
---
 xen/common/compat/memory.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/xen/common/compat/memory.c b/xen/common/compat/memory.c
index c43fa97cf1..a0e0562a40 100644
--- a/xen/common/compat/memory.c
+++ b/xen/common/compat/memory.c
@@ -7,6 +7,7 @@ EMIT_FILE;
 #include <xen/event.h>
 #include <xen/mem_access.h>
 #include <asm/current.h>
+#include <asm/guest.h>
 #include <compat/memory.h>
 
 #define xen_domid_t domid_t
@@ -146,7 +147,10 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
                 nat.rsrv->nr_extents = end_extent;
                 ++split;
             }
-
+            /* Avoid calling pv_shim_online_memory() when in a continuation. */
+            if ( pv_shim && op != XENMEM_decrease_reservation && !start_extent )
+                pv_shim_online_memory(cmp.rsrv.nr_extents - nat.rsrv->nr_extents,
+                                      cmp.rsrv.extent_order);
             break;
 
         case XENMEM_exchange:
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16


From xen-changelog-bounces@lists.xenproject.org Mon Oct 31 22:13:14 2022
Return-path: <xen-changelog-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 Oct 2022 22:13:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.433259.686210 (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opd1u-0005QA-Da; Mon, 31 Oct 2022 22:13:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 433259.686210; Mon, 31 Oct 2022 22:13:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-changelog-bounces@lists.xenproject.org>)
	id 1opd1u-0005Q2-Ap; Mon, 31 Oct 2022 22:13:14 +0000
Received: by outflank-mailman (input) for mailman id 433259;
 Mon, 31 Oct 2022 22:13:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opd1t-0005Pt-Hf
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 22:13:13 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opd1t-0002vY-H0
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 22:13:13 +0000
Received: from xen by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <ian.jackson@eu.citrix.com>) id 1opd1t-0000D6-GA
 for xen-changelog@lists.xenproject.org; Mon, 31 Oct 2022 22:13:13 +0000
X-BeenThere: xen-changelog@lists.xenproject.org
List-Id: "Change log for Mercurial \(receive only\)"
 <xen-changelog.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-changelog@lists.xenproject.org>
List-Help: <mailto:xen-changelog-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-changelog>, 
 <mailto:xen-changelog-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-changelog-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-changelog" <xen-changelog-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:Reply-To:To:From;
	bh=aHHxS58Vc1Y6IrFsTT1HtkbMe7jK2yrsZAuKyQ2ubnA=; b=us6jGqwQP1V76/BLNjiHUlnaYT
	0jE8AY3+ythRiZXGO8t1TC15K3g4cXH8hAdCQPU7j6wS4jeVkzj3piJHf/oQ6+kQczCaYTdjPdGnC
	THDT2oqOOI0hJo2GxV547F0X0VwV3ruRG45NvTpDhjrkQcFznw2ESkUYf6GLY7r8W9AM=;
From: patchbot@xen.org
To: xen-changelog@lists.xenproject.org
Reply-To: xen-devel@lists.xenproject.org
Subject: [xen stable-4.16] x86/pv-shim: correct ballooning down for compat guests
Message-Id: <E1opd1t-0000D6-GA@xenbits.xenproject.org>
Date: Mon, 31 Oct 2022 22:13:13 +0000

commit c229b16ba3eb5579a9a5d470ab16dd9ad55e57d6
Author:     Igor Druzhinin <igor.druzhinin@citrix.com>
AuthorDate: Mon Oct 31 13:28:46 2022 +0100
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: Mon Oct 31 13:28:46 2022 +0100

    x86/pv-shim: correct ballooning down for compat guests
    
    The compat layer for multi-extent memory ops may need to split incoming
    requests. Since the guest handles in the interface structures may not be
    altered, it does so by leveraging do_memory_op()'s continuation
    handling: It hands on non-initial requests with a non-zero start extent,
    with the (native) handle suitably adjusted down. As a result
    do_memory_op() sees only the first of potentially several requests with
    start extent being zero. In order to be usable as the overall result,
    the function accumulates args.nr_done, i.e. it initializes the field
    with the start extent. Therefore non-initial requests resulting from
    the split would pass too large a number into pv_shim_offline_memory().
    
    Address that breakage by always calling pv_shim_offline_memory()
    regardless of current hypercall preemption status, with a suitably
    adjusted first argument. Note that this is correct also for the native
    guest case: We now simply "commit" what was completed right away, rather
    than at the end of a series of preemption/re-start cycles. In fact this
    improves overall preemption behavior: There's no longer a potentially
    big chunk of work done non-preemptively at the end of the last
    "iteration".
    
    Fixes: b2245acc60c3 ("xen/pvshim: memory hotplug")
    Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 1d7fbc535d1d37bdc2cc53ede360b0f6651f7de1
    master date: 2022-10-28 15:49:33 +0200
---
 xen/common/memory.c | 19 +++++++------------
 1 file changed, 7 insertions(+), 12 deletions(-)

diff --git a/xen/common/memory.c b/xen/common/memory.c
index 064de4ad8d..76f8858cc3 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1420,22 +1420,17 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 
         rc = args.nr_done;
 
-        if ( args.preempted )
-            return hypercall_create_continuation(
-                __HYPERVISOR_memory_op, "lh",
-                op | (rc << MEMOP_EXTENT_SHIFT), arg);
-
 #ifdef CONFIG_X86
         if ( pv_shim && op == XENMEM_decrease_reservation )
-            /*
-             * Only call pv_shim_offline_memory when the hypercall has
-             * finished. Note that nr_done is used to cope in case the
-             * hypercall has failed and only part of the extents where
-             * processed.
-             */
-            pv_shim_offline_memory(args.nr_done, args.extent_order);
+            pv_shim_offline_memory(args.nr_done - start_extent,
+                                   args.extent_order);
 #endif
 
+        if ( args.preempted )
+            return hypercall_create_continuation(
+                __HYPERVISOR_memory_op, "lh",
+                op | (rc << MEMOP_EXTENT_SHIFT), arg);
+
         break;
 
     case XENMEM_exchange:
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.16


