
Re: [PATCH v3 4/8] x86/mem-sharing: copy GADDR based shared guest areas


  • To: Tamas K Lengyel <tamas@xxxxxxxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Wed, 27 Sep 2023 15:54:19 +0200
  • Cc: Jan Beulich <jbeulich@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>
  • Delivery-date: Wed, 27 Sep 2023 13:54:54 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Wed, Sep 27, 2023 at 07:43:26AM -0400, Tamas K Lengyel wrote:
> On Wed, Sep 27, 2023 at 7:08 AM Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:
> >
> > On Wed, May 03, 2023 at 05:56:46PM +0200, Jan Beulich wrote:
> > > In preparation of the introduction of new vCPU operations allowing to
> > > register the respective areas (one of the two is x86-specific) by
> > > guest-physical address, add the necessary fork handling (with the
> > > backing function yet to be filled in).
> > >
> > > Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> >
> > Given the very limited and specific usage of the current Xen forking
> > code, do we really need to bother about copying such areas?
> >
> > IOW: I doubt that not updating the runstate/time areas will make any
> > difference to the forking code ATM.
> 
> The current implementation of forking allows fully functional VM
> forks to be set up, including launching the dm for them. The
> toolstack side for that hasn't been merged into Xen, but that
> doesn't mean it isn't already available or used. So any internal
> change to Xen that creates guest state a guest could potentially
> interact with and rely on should get properly wired in for forking.
> So I'm happy Jan took the initiative here for wiring this up, even
> if the use-case seems niche.

But there are still some bits missing in Xen, judging from the comment
in copy_vcpu_settings().  If we don't copy the timers already, I'm
unsure whether copying the runstate/time shared areas is very
relevant.

> >
> > > ---
> > > v3: Extend comment.
> > >
> > > --- a/xen/arch/x86/mm/mem_sharing.c
> > > +++ b/xen/arch/x86/mm/mem_sharing.c
> > > @@ -1641,6 +1641,68 @@ static void copy_vcpu_nonreg_state(struc
> > >      hvm_set_nonreg_state(cd_vcpu, &nrs);
> > >  }
> > >
> > > +static int copy_guest_area(struct guest_area *cd_area,
> > > +                           const struct guest_area *d_area,
> > > +                           struct vcpu *cd_vcpu,
> > > +                           const struct domain *d)
> > > +{
> > > +    mfn_t d_mfn, cd_mfn;
> > > +
> > > +    if ( !d_area->pg )
> > > +        return 0;
> > > +
> > > +    d_mfn = page_to_mfn(d_area->pg);
> > > +
> > > +    /* Allocate & map a page for the area if it hasn't been already. */
> > > +    if ( !cd_area->pg )
> > > +    {
> > > +        gfn_t gfn = mfn_to_gfn(d, d_mfn);
> > > +        struct p2m_domain *p2m = p2m_get_hostp2m(cd_vcpu->domain);
> > > +        p2m_type_t p2mt;
> > > +        p2m_access_t p2ma;
> > > +        unsigned int offset;
> > > +        int ret;
> > > +
> > > +        cd_mfn = p2m->get_entry(p2m, gfn, &p2mt, &p2ma, 0, NULL, NULL);
> > > +        if ( mfn_eq(cd_mfn, INVALID_MFN) )
> > > +        {
> > > +            struct page_info *pg = alloc_domheap_page(cd_vcpu->domain, 0);
> > > +
> > > +            if ( !pg )
> > > +                return -ENOMEM;
> > > +
> > > +            cd_mfn = page_to_mfn(pg);
> > > +            set_gpfn_from_mfn(mfn_x(cd_mfn), gfn_x(gfn));
> > > +
> > > +            ret = p2m->set_entry(p2m, gfn, cd_mfn, PAGE_ORDER_4K, p2m_ram_rw,
> > > +                                 p2m->default_access, -1);
> > > +            if ( ret )
> > > +                return ret;
> > > +        }
> > > +        else if ( p2mt != p2m_ram_rw )
> > > +            return -EBUSY;
> >
> > Shouldn't the populate of the underlying gfn in the fork case
> > be done by map_guest_area itself?
> 
> Would seem more logical, I agree, but then map_guest_area should be
> called first, and it would conditionally call down into a
> mem_sharing_copy_guest_area if the domain is a fork.
> 
> >
> > What if a forked guest attempts to register a new runstate/time info
> > against an address not yet populated?
> 
> Unpopulated memory will get forked from the parent for all access
> paths currently in existence, including access to a forked VM's
> memory from dom0/dm and the various internal Xen access paths. If
> the memory range is not mapped in the parent, registering on that
> range ought to fail due to (I assume) existing checks that validate
> that the memory is present during registration.

What I'm trying to say is that map_guest_area() already has to deal
with forked guests, and hence the populating of the gfn shouldn't be
needed here, as map_guest_area() must know how to deal with such
guests anyway.
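The populate-on-demand step under discussion (the inner block of copy_guest_area() in the patch) boils down to: look up the gfn in the fork's p2m, allocate and insert a fresh RAM page if the entry is invalid, and fail if the gfn already holds something other than plain RAM. A stand-alone sketch of that decision logic with stubbed types, assuming it were centralized in map_guest_area() as suggested:

```c
#include <assert.h>

/* Stubbed p2m state for a single gfn: unpopulated, plain RAM, or other. */
enum p2m_type { p2m_invalid, p2m_ram_rw, p2m_other };
struct p2m { enum p2m_type t; };

static enum p2m_type p2m_get_entry(struct p2m *p2m) { return p2m->t; }
static void p2m_set_entry(struct p2m *p2m, enum p2m_type t) { p2m->t = t; }

/*
 * If map_guest_area() must already cope with forked guests, the
 * populate-on-demand step can live there, so the mem-sharing copy
 * path need not duplicate it.
 */
static int map_guest_area(struct p2m *p2m)
{
    switch ( p2m_get_entry(p2m) )
    {
    case p2m_invalid:
        /* Unpopulated gfn: allocate and insert a fresh RAM page. */
        p2m_set_entry(p2m, p2m_ram_rw);
        return 0;
    case p2m_ram_rw:
        return 0;          /* already backed by plain RAM */
    default:
        return -16;        /* -EBUSY: gfn holds something else */
    }
}
```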

Thanks, Roger.
