
Re: [PATCH v5 06/10] x86/mem-sharing: copy GADDR based shared guest areas


  • To: Tamas K Lengyel <tamas@xxxxxxxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Tue, 3 Oct 2023 16:29:50 +0200
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx, henry.wang@xxxxxxx, Jan Beulich <jbeulich@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>
  • Delivery-date: Tue, 03 Oct 2023 14:30:44 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Tue, Oct 03, 2023 at 09:53:11AM -0400, Tamas K Lengyel wrote:
> On Mon, Oct 2, 2023 at 11:13 AM Roger Pau Monne <roger.pau@xxxxxxxxxx> wrote:
> >
> > From: Jan Beulich <jbeulich@xxxxxxxx>
> >
> > In preparation of the introduction of new vCPU operations allowing to
> > register the respective areas (one of the two is x86-specific) by
> > guest-physical address, add the necessary fork handling (with the
> > backing function yet to be filled in).
> >
> > Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> > Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> > ---
> > Changes since v4:
> >  - Rely on map_guest_area() to populate the child p2m if necessary.
> > ---
> >  xen/arch/x86/mm/mem_sharing.c | 31 +++++++++++++++++++++++++++++++
> >  xen/common/domain.c           |  7 +++++++
> >  2 files changed, 38 insertions(+)
> >
> > diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
> > index 5f8f1fb4d871..99cf001fd70f 100644
> > --- a/xen/arch/x86/mm/mem_sharing.c
> > +++ b/xen/arch/x86/mm/mem_sharing.c
> > @@ -1641,6 +1641,24 @@ static void copy_vcpu_nonreg_state(struct vcpu *d_vcpu, struct vcpu *cd_vcpu)
> >      hvm_set_nonreg_state(cd_vcpu, &nrs);
> >  }
> >
> > +static int copy_guest_area(struct guest_area *cd_area,
> > +                           const struct guest_area *d_area,
> > +                           struct vcpu *cd_vcpu,
> > +                           const struct domain *d)
> > +{
> > +    unsigned int offset;
> > +
> > +    /* Check if no area to map, or already mapped. */
> > +    if ( !d_area->pg || cd_area->pg )
> > +        return 0;
> > +
> > +    offset = PAGE_OFFSET(d_area->map);
> > +    return map_guest_area(cd_vcpu, gfn_to_gaddr(
> > +                                       mfn_to_gfn(d, page_to_mfn(d_area->pg))) +
> > +                                   offset,
> > +                          PAGE_SIZE - offset, cd_area, NULL);
> > +}
> > +
> >  static int copy_vpmu(struct vcpu *d_vcpu, struct vcpu *cd_vcpu)
> >  {
> >      struct vpmu_struct *d_vpmu = vcpu_vpmu(d_vcpu);
> > @@ -1709,6 +1727,16 @@ static int copy_vcpu_settings(struct domain *cd, const struct domain *d)
> >                  return ret;
> >          }
> >
> > +        /* Same for the (physically registered) runstate and time info areas. */
> > +        ret = copy_guest_area(&cd_vcpu->runstate_guest_area,
> > +                              &d_vcpu->runstate_guest_area, cd_vcpu, d);
> > +        if ( ret )
> > +            return ret;
> > +        ret = copy_guest_area(&cd_vcpu->arch.time_guest_area,
> > +                              &d_vcpu->arch.time_guest_area, cd_vcpu, d);
> > +        if ( ret )
> > +            return ret;
> > +
> >          ret = copy_vpmu(d_vcpu, cd_vcpu);
> >          if ( ret )
> >              return ret;
> > @@ -1950,7 +1978,10 @@ int mem_sharing_fork_reset(struct domain *d, bool reset_state,
> >
> >   state:
> >      if ( reset_state )
> > +    {
> >          rc = copy_settings(d, pd);
> > +        /* TBD: What to do here with -ERESTART? */
> 
> There is no situation where we get an -ERESTART here currently. Is
> map_guest_area expected to run into situations where it fails with
> that rc?

Yes, there's a spin_trylock() call that will result in
map_guest_area() returning -ERESTART.
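
For reference, the pattern is roughly the one sketched below.  This is a
minimal, self-contained analogue (pthread_mutex_trylock() standing in for
spin_trylock(), and ERESTART_HC is a made-up constant), not the actual
hypervisor code: the point is only that a contended trylock makes the
mapping path bail out so the hypercall can be retried as a continuation.

/* Illustrative analogue only -- not the actual Xen implementation. */
#include <pthread.h>
#include <stdio.h>

#define ERESTART_HC 85 /* stand-in for Xen's -ERESTART */

static pthread_mutex_t deadlock_mutex = PTHREAD_MUTEX_INITIALIZER;

/* "Trylock or ask for a continuation": mirror of the behaviour described above. */
static int map_area_sketch(void)
{
    /* If the lock is contended, don't spin: let the caller retry later. */
    if ( pthread_mutex_trylock(&deadlock_mutex) != 0 )
        return -ERESTART_HC;

    /* ... map and publish the guest area here ... */

    pthread_mutex_unlock(&deadlock_mutex);
    return 0;
}

int main(void)
{
    int rc = map_area_sketch();

    printf("rc = %d (%s)\n", rc,
           rc ? "would be retried as a continuation" : "mapped");
    return 0;
}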

> If yes we might need a lock in place so we can block until it
> can succeed.

I'm not sure whether returning -ERESTART can actually happen in
map_guest_area() for the fork case: the child domain is still paused
at this point, so there can't be concurrent guest hypercalls that
would also cause the domain's hypercall_deadlock_mutex to be acquired.

The comment was added by Jan, so I cannot be certain about the
intention, nor would I want to misinterpret his words.  My
understanding is that future uses of copy_settings() might indeed need
to report -ERESTART, and that it would then need to be propagated for
proper hypercall continuations at some point.
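
To make that last point concrete, the sketch below shows one hypothetical
way the propagation could look.  The fork_reset/copy_settings shape is
borrowed from the patch, but the stub and the dispatcher-style retry loop
are purely illustrative assumptions, not what the code does today.

/* Hypothetical, simplified model of propagating -ERESTART; not the patch's code. */
#include <stdio.h>

#define ERESTART_HC 85 /* stand-in for Xen's -ERESTART */

/* Pretend copy_settings() asks for a continuation on its first pass only. */
static int copy_settings_stub(int pass)
{
    return pass == 0 ? -ERESTART_HC : 0;
}

/* fork_reset-like caller: hand the error back up instead of swallowing it. */
static int fork_reset_sketch(int pass)
{
    return copy_settings_stub(pass);
}

int main(void)
{
    int pass = 0, rc;

    /* Dispatcher-style loop: re-issue the operation until it completes. */
    while ( (rc = fork_reset_sketch(pass)) == -ERESTART_HC )
        printf("pass %d: got -ERESTART, continuing\n", pass++);

    printf("pass %d: rc = %d\n", pass, rc);
    return 0;
}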

Thanks, Roger.



 

