
Re: [PATCH v3 4/8] x86/mem-sharing: copy GADDR based shared guest areas


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Wed, 27 Sep 2023 13:08:15 +0200
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Tamas K Lengyel <tamas@xxxxxxxxxxxxx>
  • Delivery-date: Wed, 27 Sep 2023 11:08:40 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Wed, May 03, 2023 at 05:56:46PM +0200, Jan Beulich wrote:
> In preparation of the introduction of new vCPU operations allowing to
> register the respective areas (one of the two is x86-specific) by
> guest-physical address, add the necessary fork handling (with the
> backing function yet to be filled in).
> 
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>

Given the very limited and specific usage of the current Xen forking
code, do we really need to bother about copying such areas?

IOW: I doubt that not updating the runstate/time areas will make any
difference to the forking code ATM.

> ---
> v3: Extend comment.
> 
> --- a/xen/arch/x86/mm/mem_sharing.c
> +++ b/xen/arch/x86/mm/mem_sharing.c
> @@ -1641,6 +1641,68 @@ static void copy_vcpu_nonreg_state(struc
>      hvm_set_nonreg_state(cd_vcpu, &nrs);
>  }
>  
> +static int copy_guest_area(struct guest_area *cd_area,
> +                           const struct guest_area *d_area,
> +                           struct vcpu *cd_vcpu,
> +                           const struct domain *d)
> +{
> +    mfn_t d_mfn, cd_mfn;
> +
> +    if ( !d_area->pg )
> +        return 0;
> +
> +    d_mfn = page_to_mfn(d_area->pg);
> +
> +    /* Allocate & map a page for the area if it hasn't been already. */
> +    if ( !cd_area->pg )
> +    {
> +        gfn_t gfn = mfn_to_gfn(d, d_mfn);
> +        struct p2m_domain *p2m = p2m_get_hostp2m(cd_vcpu->domain);
> +        p2m_type_t p2mt;
> +        p2m_access_t p2ma;
> +        unsigned int offset;
> +        int ret;
> +
> +        cd_mfn = p2m->get_entry(p2m, gfn, &p2mt, &p2ma, 0, NULL, NULL);
> +        if ( mfn_eq(cd_mfn, INVALID_MFN) )
> +        {
> +            struct page_info *pg = alloc_domheap_page(cd_vcpu->domain, 0);
> +
> +            if ( !pg )
> +                return -ENOMEM;
> +
> +            cd_mfn = page_to_mfn(pg);
> +            set_gpfn_from_mfn(mfn_x(cd_mfn), gfn_x(gfn));
> +
> +            ret = p2m->set_entry(p2m, gfn, cd_mfn, PAGE_ORDER_4K, p2m_ram_rw,
> +                                 p2m->default_access, -1);
> +            if ( ret )
> +                return ret;
> +        }
> +        else if ( p2mt != p2m_ram_rw )
> +            return -EBUSY;

Shouldn't populating the underlying gfn in the fork case be done by
map_guest_area() itself?

What if a forked guest attempts to register a new runstate/time info
against an address not yet populated?
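To illustrate the suggestion, here is a minimal sketch with toy stand-ins for Xen's p2m machinery (all types, names, and constants below are illustrative, not the real Xen API): map_guest_area() could populate the backing gfn itself when it finds it missing, instead of relying on the caller to have done so beforehand:

```c
#include <errno.h>

/* Hypothetical stand-ins for Xen's p2m machinery; names are illustrative. */
#define PAGE_SHIFT 12
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define P2M_SLOTS  16

static int populated[P2M_SLOTS];   /* toy p2m: is this gfn backed? */

/* Models the alloc_domheap_page() + p2m->set_entry() path from the patch. */
static int populate_gfn(unsigned long gfn)
{
    if ( gfn >= P2M_SLOTS )
        return -EINVAL;
    if ( populated[gfn] )
        return 0;            /* already backed, nothing to do */
    populated[gfn] = 1;      /* stands in for page allocation + p2m insert */
    return 0;
}

/*
 * A map_guest_area() that populates the gfn itself when needed, so a
 * registration against a not-yet-populated address also works.
 */
static int map_guest_area(unsigned long gaddr, unsigned int size)
{
    unsigned long offset = gaddr & (PAGE_SIZE - 1);
    int rc;

    if ( offset + size > PAGE_SIZE )
        return -EINVAL;      /* area must not cross a page boundary */

    rc = populate_gfn(gaddr >> PAGE_SHIFT);
    if ( rc )
        return rc;

    /* ... mapping of the area proper would follow here ... */
    return 0;
}
```

With this shape, the fork path and a guest registering a fresh area would share the same populate-on-demand logic.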

> +        /*
> +         * Map the area into the guest. For simplicity specify the entire
> +         * range up to the end of the page: All the function uses it for is
> +         * to check that the range doesn't cross page boundaries. Having the
> +         * area mapped in the original domain implies that it fits there and
> +         * therefore will also fit in the clone.
> +         */
> +        offset = PAGE_OFFSET(d_area->map);
> +        ret = map_guest_area(cd_vcpu, gfn_to_gaddr(gfn) + offset,
> +                             PAGE_SIZE - offset, cd_area, NULL);
> +        if ( ret )
> +            return ret;
> +    }
> +    else
> +        cd_mfn = page_to_mfn(cd_area->pg);
> +
> +    copy_domain_page(cd_mfn, d_mfn);
> +
> +    return 0;
> +}
> +
>  static int copy_vpmu(struct vcpu *d_vcpu, struct vcpu *cd_vcpu)
>  {
>      struct vpmu_struct *d_vpmu = vcpu_vpmu(d_vcpu);
> @@ -1733,6 +1795,16 @@ static int copy_vcpu_settings(struct dom
>              copy_domain_page(new_vcpu_info_mfn, vcpu_info_mfn);
>          }
>  
> +        /* Same for the (physically registered) runstate and time info areas. */
> +        ret = copy_guest_area(&cd_vcpu->runstate_guest_area,
> +                              &d_vcpu->runstate_guest_area, cd_vcpu, d);
> +        if ( ret )
> +            return ret;
> +        ret = copy_guest_area(&cd_vcpu->arch.time_guest_area,
> +                              &d_vcpu->arch.time_guest_area, cd_vcpu, d);
> +        if ( ret )
> +            return ret;
> +
>          ret = copy_vpmu(d_vcpu, cd_vcpu);
>          if ( ret )
>              return ret;
> @@ -1974,7 +2046,10 @@ int mem_sharing_fork_reset(struct domain
>  
>   state:
>      if ( reset_state )
> +    {
>          rc = copy_settings(d, pd);
> +        /* TBD: What to do here with -ERESTART? */
> +    }
>  
>      domain_unpause(d);
>  
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -1572,6 +1572,13 @@ void unmap_vcpu_info(struct vcpu *v)
>      put_page_and_type(mfn_to_page(mfn));
>  }
>  
> +int map_guest_area(struct vcpu *v, paddr_t gaddr, unsigned int size,
> +                   struct guest_area *area,
> +                   void (*populate)(void *dst, struct vcpu *v))

Oh, the prototype for this is added in patch 1, almost missed it.

Why not also add this dummy implementation in patch 1 then?
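A stub along these lines could accompany the prototype in patch 1 until the real implementation lands (sketch only: the paddr_t typedef is a local stand-in, and the error value is my choice, not necessarily what the series would use):

```c
#include <errno.h>

typedef unsigned long long paddr_t;   /* local stand-in for Xen's paddr_t */

struct vcpu;        /* opaque here */
struct guest_area;  /* opaque here */

/* Dummy until the real implementation is introduced later in the series. */
int map_guest_area(struct vcpu *v, paddr_t gaddr, unsigned int size,
                   struct guest_area *area,
                   void (*populate)(void *dst, struct vcpu *v))
{
    (void)v; (void)gaddr; (void)size; (void)area; (void)populate;
    return -EOPNOTSUPP;
}
```

That way the intermediate patches build and link, and the later patch only has to replace the body.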

Thanks, Roger.



 

