
[PATCH v5 01/10] mem_sharing/fork: do not attempt to populate vcpu_info page


  • To: xen-devel@xxxxxxxxxxxxxxxxxxxx, henry.wang@xxxxxxx
  • From: Roger Pau Monne <roger.pau@xxxxxxxxxx>
  • Date: Mon, 2 Oct 2023 17:11:18 +0200
  • Cc: Roger Pau Monne <roger.pau@xxxxxxxxxx>, Tamas K Lengyel <tamas@xxxxxxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>
  • Delivery-date: Mon, 02 Oct 2023 15:12:06 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

Instead let map_vcpu_info() and its call to get_page_from_gfn()
populate the page in the child as needed.  Also remove the bogus
copy_domain_page(): it would have to be placed before the call to
map_vcpu_info(), as the latter can update the contents of the vcpu_info
page.
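
For reference, roughly how that works (paraphrased sketch, not the
verbatim source; the exact checks in xen/common/domain.c may differ):

    int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned int offset)
    {
        /*
         * Populating/unsharing lookup: for a fork, an empty p2m entry at
         * 'gfn' is filled from the parent here, so copy_vcpu_settings()
         * no longer needs to allocate and insert the page itself.
         */
        struct page_info *page = get_page_from_gfn(v->domain, gfn, NULL,
                                                   P2M_UNSHARE);

        if ( !page )
            return -EINVAL;

        /* ... map the page and initialize the new vcpu_info area ... */
    }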

Note that this eliminates a bug in copy_vcpu_settings(): The function did
allocate a new page regardless of the GFN already having a mapping, thus in
particular breaking the case of two vCPU-s having their info areas on the same
page.
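
As an illustration (hypothetical guest-side code, not part of this patch;
'shared_gfn' and 'vcpu_id' are made-up names), a guest may validly place
several vcpu_info areas within a single frame:

    struct vcpu_register_vcpu_info info = {
        .mfn    = shared_gfn,          /* same frame for both vCPU-s */
        .offset = vcpu_id * sizeof(struct vcpu_info),
    };
    int rc = HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info, vcpu_id, &info);

The removed logic would have allocated a fresh page per vCPU for the same
GFN in the fork; going through map_vcpu_info() instead reuses the already
populated page for the second vCPU.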

Fixes: 41548c5472a3 ('mem_sharing: VM forking')
Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
---
Only build tested.
---
Changes since v4:
 - New in this version.
---
 xen/arch/x86/mm/mem_sharing.c | 36 ++++++-----------------------------
 1 file changed, 6 insertions(+), 30 deletions(-)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index ae5366d4476e..5f8f1fb4d871 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1689,48 +1689,24 @@ static int copy_vcpu_settings(struct domain *cd, const struct domain *d)
     unsigned int i;
     struct p2m_domain *p2m = p2m_get_hostp2m(cd);
     int ret = -EINVAL;
+    mfn_t vcpu_info_mfn;
 
     for ( i = 0; i < cd->max_vcpus; i++ )
     {
         struct vcpu *d_vcpu = d->vcpu[i];
         struct vcpu *cd_vcpu = cd->vcpu[i];
-        mfn_t vcpu_info_mfn;
 
         if ( !d_vcpu || !cd_vcpu )
             continue;
 
-        /* Copy & map in the vcpu_info page if the guest uses one */
+        /* Map in the vcpu_info page if the guest uses one */
         vcpu_info_mfn = d_vcpu->vcpu_info_mfn;
         if ( !mfn_eq(vcpu_info_mfn, INVALID_MFN) )
         {
-            mfn_t new_vcpu_info_mfn = cd_vcpu->vcpu_info_mfn;
-
-            /* Allocate & map the page for it if it hasn't been already */
-            if ( mfn_eq(new_vcpu_info_mfn, INVALID_MFN) )
-            {
-                gfn_t gfn = mfn_to_gfn(d, vcpu_info_mfn);
-                unsigned long gfn_l = gfn_x(gfn);
-                struct page_info *page;
-
-                if ( !(page = alloc_domheap_page(cd, 0)) )
-                    return -ENOMEM;
-
-                new_vcpu_info_mfn = page_to_mfn(page);
-                set_gpfn_from_mfn(mfn_x(new_vcpu_info_mfn), gfn_l);
-
-                ret = p2m->set_entry(p2m, gfn, new_vcpu_info_mfn,
-                                     PAGE_ORDER_4K, p2m_ram_rw,
-                                     p2m->default_access, -1);
-                if ( ret )
-                    return ret;
-
-                ret = map_vcpu_info(cd_vcpu, gfn_l,
-                                    PAGE_OFFSET(d_vcpu->vcpu_info));
-                if ( ret )
-                    return ret;
-            }
-
-            copy_domain_page(new_vcpu_info_mfn, vcpu_info_mfn);
+            ret = map_vcpu_info(cd_vcpu, gfn_x(mfn_to_gfn(d, vcpu_info_mfn)),
+                                PAGE_OFFSET(d_vcpu->vcpu_info));
+            if ( ret )
+                return ret;
         }
 
         ret = copy_vpmu(d_vcpu, cd_vcpu);
-- 
2.42.0