
Re: [PATCH 1/3] x86/mem_sharing: option to skip populating special pages during fork


  • To: Tamas K Lengyel <tamas.lengyel@xxxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Thu, 24 Mar 2022 15:50:32 +0100
  • Cc: <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Tamas K Lengyel <tamas@xxxxxxxxxxxxx>
  • Delivery-date: Thu, 24 Mar 2022 14:50:50 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Tue, Mar 22, 2022 at 01:41:37PM -0400, Tamas K Lengyel wrote:
> Add option to the fork memop to skip populating the fork with special pages.
> These special pages are only necessary when setting up forks to be fully
> functional with a toolstack. For short-lived forks where no toolstack is
> active these pages are unnecessary.

I'm not sure those are strictly related to having a toolstack. For
example, the vcpu_info page has nothing to do with having a toolstack:
it's only used by the guest in order to receive events or fetch
time-related data. So while a short-lived fork might not make use of
those, that has nothing to do with having a toolstack or not.
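
Just to illustrate the point (a guest-side sketch, not part of this
patch; HYPERVISOR_vcpu_op and virt_to_mfn are the usual Linux-style
guest wrappers and are assumed here), vcpu_info is something the guest
registers for itself:

/*
 * Illustrative guest-side sketch only: registering a per-vCPU
 * vcpu_info area through the public VCPUOP_register_vcpu_info ABI.
 * Error handling omitted.
 */
static void register_vcpu_info(unsigned int cpu, struct vcpu_info *vi)
{
    struct vcpu_register_vcpu_info info = {
        .mfn    = virt_to_mfn(vi),                  /* frame holding vcpu_info */
        .offset = (unsigned long)vi & ~PAGE_MASK,   /* offset within that frame */
    };

    /* From here on the hypervisor delivers event/time data through vi. */
    HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info, cpu, &info);
}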

> 
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@xxxxxxxxx>
> ---
>  xen/arch/x86/include/asm/hvm/domain.h |  4 +++-
>  xen/arch/x86/mm/mem_sharing.c         | 33 +++++++++++++++++----------
>  xen/include/public/memory.h           |  4 ++--
>  3 files changed, 26 insertions(+), 15 deletions(-)
> 
> diff --git a/xen/arch/x86/include/asm/hvm/domain.h b/xen/arch/x86/include/asm/hvm/domain.h
> index 698455444e..446cd06411 100644
> --- a/xen/arch/x86/include/asm/hvm/domain.h
> +++ b/xen/arch/x86/include/asm/hvm/domain.h
> @@ -31,7 +31,9 @@
>  #ifdef CONFIG_MEM_SHARING
>  struct mem_sharing_domain
>  {
> -    bool enabled, block_interrupts;
> +    bool enabled;
> +    bool block_interrupts;
> +    bool skip_special_pages;
>  
>      /*
>       * When releasing shared gfn's in a preemptible manner, recall where
> diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
> index 15e6a7ed81..84c04ddfa3 100644
> --- a/xen/arch/x86/mm/mem_sharing.c
> +++ b/xen/arch/x86/mm/mem_sharing.c
> @@ -1643,7 +1643,8 @@ static int bring_up_vcpus(struct domain *cd, struct domain *d)
>      return 0;
>  }
>  
> -static int copy_vcpu_settings(struct domain *cd, const struct domain *d)
> +static int copy_vcpu_settings(struct domain *cd, const struct domain *d,
> +                              bool skip_special_pages)
>  {
>      unsigned int i;
>      struct p2m_domain *p2m = p2m_get_hostp2m(cd);
> @@ -1660,7 +1661,7 @@ static int copy_vcpu_settings(struct domain *cd, const struct domain *d)
>  
>          /* Copy & map in the vcpu_info page if the guest uses one */
>          vcpu_info_mfn = d_vcpu->vcpu_info_mfn;
> -        if ( !mfn_eq(vcpu_info_mfn, INVALID_MFN) )
> +        if ( !skip_special_pages && !mfn_eq(vcpu_info_mfn, INVALID_MFN) )
>          {
>              mfn_t new_vcpu_info_mfn = cd_vcpu->vcpu_info_mfn;
>  
> @@ -1807,17 +1808,18 @@ static int copy_special_pages(struct domain *cd, struct domain *d)
>      return 0;
>  }
>  
> -static int copy_settings(struct domain *cd, struct domain *d)
> +static int copy_settings(struct domain *cd, struct domain *d,
> +                         bool skip_special_pages)
>  {
>      int rc;
>  
> -    if ( (rc = copy_vcpu_settings(cd, d)) )
> +    if ( (rc = copy_vcpu_settings(cd, d, skip_special_pages)) )
>          return rc;
>  
>      if ( (rc = hvm_copy_context_and_params(cd, d)) )
>          return rc;
>  
> -    if ( (rc = copy_special_pages(cd, d)) )
> +    if ( !skip_special_pages && (rc = copy_special_pages(cd, d)) )
>          return rc;
>  
>      copy_tsc(cd, d);
> @@ -1826,9 +1828,11 @@ static int copy_settings(struct domain *cd, struct domain *d)
>      return rc;
>  }
>  
> -static int fork(struct domain *cd, struct domain *d)
> +static int fork(struct domain *cd, struct domain *d, uint16_t flags)
>  {
>      int rc = -EBUSY;
> +    bool block_interrupts = flags & XENMEM_FORK_BLOCK_INTERRUPTS;
> +    bool skip_special_pages = flags & XENMEM_FORK_SKIP_SPECIAL_PAGES;
>  
>      if ( !cd->controller_pause_count )
>          return rc;
> @@ -1856,7 +1860,13 @@ static int fork(struct domain *cd, struct domain *d)
>      if ( (rc = bring_up_vcpus(cd, d)) )
>          goto done;
>  
> -    rc = copy_settings(cd, d);
> +    if ( !(rc = copy_settings(cd, d, skip_special_pages)) )

Can you set
cd->arch.hvm.mem_sharing.{block_interrupts,skip_special_pages} earlier
so that you don't need to pass the options around to copy_settings and
copy_vcpu_settings?
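
Something along these lines is what I have in mind (a rough sketch on
top of this patch, not tested): fork() records the options up front and
the helpers keep their original signatures, testing the field in cd
directly.

/* Rough, untested sketch of the suggestion above. */
static int fork(struct domain *cd, struct domain *d, uint16_t flags)
{
    int rc = -EBUSY;

    if ( !cd->controller_pause_count )
        return rc;

    /* Stash the options so the helpers can read them from cd directly. */
    cd->arch.hvm.mem_sharing.block_interrupts =
        flags & XENMEM_FORK_BLOCK_INTERRUPTS;
    cd->arch.hvm.mem_sharing.skip_special_pages =
        flags & XENMEM_FORK_SKIP_SPECIAL_PAGES;

    /* ... existing setup and bring_up_vcpus() unchanged ... */

    rc = copy_settings(cd, d);

    /* ... */
}

static int copy_settings(struct domain *cd, struct domain *d)
{
    int rc;

    if ( (rc = copy_vcpu_settings(cd, d)) )
        return rc;

    if ( (rc = hvm_copy_context_and_params(cd, d)) )
        return rc;

    /* Consult the flag recorded in the fork instead of a parameter. */
    if ( !cd->arch.hvm.mem_sharing.skip_special_pages &&
         (rc = copy_special_pages(cd, d)) )
        return rc;

    copy_tsc(cd, d);

    return rc;
}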

> +    {
> +        cd->arch.hvm.mem_sharing.block_interrupts = block_interrupts;
> +        cd->arch.hvm.mem_sharing.skip_special_pages = skip_special_pages;
> +        /* skip mapping the vAPIC page on unpause if skipping special pages */
> +        cd->creation_finished = skip_special_pages;

I think this is dangerous. While it might be true at the moment that
arch_domain_creation_finished() only maps the vAPIC page, there's no
guarantee it won't do other things in the future that are required for
the VM to be started.
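
Roughly, the flow I'm worried about looks like this (a paraphrased
sketch of the unpause-by-toolstack path, not the literal common/domain.c
code):

/*
 * Paraphrased sketch: the arch hook only runs when creation_finished is
 * still false, so pre-setting the flag skips *everything* the hook does
 * (now or in the future), not just the vAPIC page mapping.
 */
if ( !d->creation_finished )
{
    d->creation_finished = true;
    arch_domain_creation_finished(d);   /* never reached if the flag was pre-set */
}
domain_unpause(d);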

Does it add much overhead to map the vAPIC page?

Thanks, Roger.



 

