
Re: [PATCH] xen/common: Do not allocate magic pages 1:1 for direct mapped domains


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Henry Wang <xin.wang2@xxxxxxx>
  • Date: Tue, 27 Feb 2024 21:35:56 +0800
  • Cc: Anthony PERARD <anthony.perard@xxxxxxxxxx>, Juergen Gross <jgross@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, "Michal Orzel" <michal.orzel@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Alec Kwapis <alec.kwapis@xxxxxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Tue, 27 Feb 2024 13:36:12 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

Hi Jan,

On 2/27/2024 9:27 PM, Jan Beulich wrote:
> On 27.02.2024 14:24, Henry Wang wrote:
>> On 2/26/2024 4:25 PM, Jan Beulich wrote:
>>> On 26.02.2024 02:19, Henry Wang wrote:
>>>> --- a/xen/common/memory.c
>>>> +++ b/xen/common/memory.c
>>>> @@ -219,7 +219,7 @@ static void populate_physmap(struct memop_args *a)
>>>>            }
>>>>            else
>>>>            {
>>>> -            if ( is_domain_direct_mapped(d) )
>>>> +            if ( is_domain_direct_mapped(d) && !is_magic_gpfn(gpfn) )
>>>>                {
>>>>                    mfn = _mfn(gpfn);
>>> I wonder whether is_domain_direct_mapped() shouldn't either be cloned
>>> into e.g. is_gfn_direct_mapped(d, gfn), or be adjusted in-place to gain
>>> such a (then optional) 2nd parameter. (Of course there again shouldn't be
>>> a need for every domain to define a stub is_domain_direct_mapped() - if
>>> not defined by an arch header, the stub can be supplied in a single
>>> central place.)
>> Same here, it looks like you prefer the centralized
>> is_domain_direct_mapped() now, as we are having more archs. I can do the
>> clean-up when sending v2. Just out of curiosity, do you think it is a
>> good practice to place the is_domain_direct_mapped() implementation in
>> xen/domain.h with proper arch #ifdefs?
> arch #ifdefs? I'd like to avoid such, if at all possible. Generic fallbacks
> generally ought to key their conditionals to the very identifier not
> (already) being defined.

I meant something like this (since I saw that CDF_directmap is currently gated this way in xen/domain.h):

#ifdef CONFIG_ARM
#define is_domain_direct_mapped(d) ((d)->cdf & CDF_directmap)
#else
#define is_domain_direct_mapped(d) ((void)(d), 0)
#endif

I am having trouble thinking of another way to keep the macro in a centralized place while
avoiding the #ifdefs. Would you mind elaborating a bit? Thanks!

Kind regards,
Henry


> Jan