
Re: [PATCH v4 5/9] x86/mem-sharing: copy GADDR based shared guest areas


  • To: Tamas K Lengyel <tamas@xxxxxxxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Thu, 28 Sep 2023 16:08:22 +0200
  • Cc: Jan Beulich <jbeulich@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Henry Wang <Henry.Wang@xxxxxxx>
  • Delivery-date: Thu, 28 Sep 2023 14:08:49 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Thu, Sep 28, 2023 at 08:57:04AM -0400, Tamas K Lengyel wrote:
> On Thu, Sep 28, 2023 at 7:08 AM Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:
> >
> > On Thu, Sep 28, 2023 at 12:11:02PM +0200, Jan Beulich wrote:
> > > On 28.09.2023 11:51, Roger Pau Monné wrote:
> > > > On Thu, Sep 28, 2023 at 09:16:20AM +0200, Jan Beulich wrote:
> > > >> --- a/xen/arch/x86/mm/mem_sharing.c
> > > >> +++ b/xen/arch/x86/mm/mem_sharing.c
> > > >> @@ -1641,6 +1641,68 @@ static void copy_vcpu_nonreg_state(struc
> > > >>      hvm_set_nonreg_state(cd_vcpu, &nrs);
> > > >>  }
> > > >>
> > > >> +static int copy_guest_area(struct guest_area *cd_area,
> > > >> +                           const struct guest_area *d_area,
> > > >> +                           struct vcpu *cd_vcpu,
> > > >> +                           const struct domain *d)
> > > >> +{
> > > >> +    mfn_t d_mfn, cd_mfn;
> > > >> +
> > > >> +    if ( !d_area->pg )
> > > >> +        return 0;
> > > >> +
> > > >> +    d_mfn = page_to_mfn(d_area->pg);
> > > >> +
> > > >> +    /* Allocate & map a page for the area if it hasn't been already. */
> > > >> +    if ( !cd_area->pg )
> > > >> +    {
> > > >> +        gfn_t gfn = mfn_to_gfn(d, d_mfn);
> > > >> +        struct p2m_domain *p2m = p2m_get_hostp2m(cd_vcpu->domain);
> > > >> +        p2m_type_t p2mt;
> > > >> +        p2m_access_t p2ma;
> > > >> +        unsigned int offset;
> > > >> +        int ret;
> > > >> +
> > > >> +        cd_mfn = p2m->get_entry(p2m, gfn, &p2mt, &p2ma, 0, NULL, NULL);
> > > >> +        if ( mfn_eq(cd_mfn, INVALID_MFN) )
> > > >> +        {
> > > >> +            struct page_info *pg = alloc_domheap_page(cd_vcpu->domain, 0);
> > > >> +
> > > >> +            if ( !pg )
> > > >> +                return -ENOMEM;
> > > >> +
> > > >> +            cd_mfn = page_to_mfn(pg);
> > > >> +            set_gpfn_from_mfn(mfn_x(cd_mfn), gfn_x(gfn));
> > > >> +
> > > >> +            ret = p2m->set_entry(p2m, gfn, cd_mfn, PAGE_ORDER_4K, p2m_ram_rw,
> > > >> +                                 p2m->default_access, -1);
> > > >> +            if ( ret )
> > > >> +                return ret;
> > > >> +        }
> > > >
> > > > I'm still unsure why map_guest_area() shouldn't be able to deal with a
> > > > forked child needing the page to be mapped.  What happens when a
> > > > forked child executes the hypercall to map such areas against not yet
> > > > populated gfns?
> > > >
> > > > Shouldn't map_guest_area() be capable of handling those calls and
> > > > populating on demand?
> > >
> > > Funny you should use this wording: P2M_ALLOC will deal with populating
> > > PoD slots, yes, but aiui forks don't fill their p2m with all PoD slots,
> > > but rather leave them empty. Yet again I need to leave it to Tamas to
> > > confirm or prove me wrong.
> >
> > If the child p2m population is only triggered by guest accesses, then a
> > lot of hypercalls are likely not to work correctly on children.
> 
> That's not what's happening. As I said before, ALL access paths, be
> that from the guest, dom0 or Xen trigger page population. If there is
> a hole and P2M_ALLOC is set we do the following in
> p2m_get_gfn_type_access:
> 
>     /* Check if we need to fork the page */
>     if ( (q & P2M_ALLOC) && p2m_is_hole(*t) &&
>          !mem_sharing_fork_page(p2m->domain, gfn, q & P2M_UNSHARE) )
>         mfn = p2m->get_entry(p2m, gfn, t, a, q, page_order, NULL);
> 
> Depending on the type of access we either populate the hole with a
> shared memory entry or a copy.

Then the code here is redundant, as the call to get_page_from_gfn(...,
P2M_UNSHARE) in map_vcpu_info() will already take care of populating
the child p2m and copying the parent's page contents?

Thanks, Roger.



 

