
Re: [PATCH 3/3] x86/mem_sharing: make fork_reset more configurable


  • To: Tamas K Lengyel <tamas@xxxxxxxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Thu, 24 Mar 2022 17:43:54 +0100
  • Cc: Tamas K Lengyel <tamas.lengyel@xxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Juergen Gross <JGross@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Jan Beulich <JBeulich@xxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Alexandru Isaila <aisaila@xxxxxxxxxxxxxxx>, Petre Pircalabu <ppircalabu@xxxxxxxxxxxxxxx>
  • Delivery-date: Thu, 24 Mar 2022 16:44:16 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Thu, Mar 24, 2022 at 12:22:49PM -0400, Tamas K Lengyel wrote:
> On Thu, Mar 24, 2022 at 12:04 PM Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:
> >
> > On Thu, Mar 24, 2022 at 11:52:38AM -0400, Tamas K Lengyel wrote:
> > > On Thu, Mar 24, 2022 at 11:46 AM Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:
> > > >
> > > > On Tue, Mar 22, 2022 at 01:41:39PM -0400, Tamas K Lengyel wrote:
> > > > > diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
> > > > > index 208d8dcbd9..30ce23c5a7 100644
> > > > > --- a/xen/include/public/memory.h
> > > > > +++ b/xen/include/public/memory.h
> > > > > @@ -541,12 +541,14 @@ struct xen_mem_sharing_op {
> > > > >                  uint32_t gref;     /* IN: gref to debug         */
> > > > >              } u;
> > > > >          } debug;
> > > > > -        struct mem_sharing_op_fork {      /* OP_FORK */
> > > > > +        struct mem_sharing_op_fork {      /* OP_FORK/_RESET */
> > > > >              domid_t parent_domain;        /* IN: parent's domain id */
> > > > >  /* These flags only makes sense for short-lived forks */
> > > > >  #define XENMEM_FORK_WITH_IOMMU_ALLOWED (1u << 0)
> > > > >  #define XENMEM_FORK_BLOCK_INTERRUPTS   (1u << 1)
> > > > >  #define XENMEM_FORK_SKIP_SPECIAL_PAGES (1u << 2)
> > > > > +#define XENMEM_FORK_RESET_STATE        (1u << 3)
> > > > > +#define XENMEM_FORK_RESET_MEMORY       (1u << 4)
> > > > >              uint16_t flags;               /* IN: optional settings */
> > > > >              uint32_t pad;                 /* Must be set to 0 */
> > > > >          } fork;
> > > > > diff --git a/xen/include/public/vm_event.h b/xen/include/public/vm_event.h
> > > > > index bb003d21d0..81c2ee28cc 100644
> > > > > --- a/xen/include/public/vm_event.h
> > > > > +++ b/xen/include/public/vm_event.h
> > > > > @@ -127,6 +127,14 @@
> > > > >   * Reset the vmtrace buffer (if vmtrace is enabled)
> > > > >   */
> > > > >  #define VM_EVENT_FLAG_RESET_VMTRACE      (1 << 13)
> > > > > +/*
> > > > > + * Reset the VM state (if VM is fork)
> > > > > + */
> > > > > +#define VM_EVENT_FLAG_RESET_FORK_STATE   (1 << 14)
> > > > > +/*
> > > > > + * Remove unshared entries from physmap (if VM is fork)
> > > > > + */
> > > > > +#define VM_EVENT_FLAG_RESET_FORK_MEMORY  (1 << 15)
> > > >
> > > > I'm confused about why two different interfaces are added to do this
> > > > kind of selective reset, one in vm_event and one in xenmem_fork?
> > > >
> > > > I think the natural place for the option to live would be
> > > > XENMEM_FORK?
> > >
> > > Yes, that's the natural place for it. But we are adding it to both for
> > > a reason. In our use case the reset operation happens right after a
> > > vm_event is received, to which we already have to send a reply. Setting
> > > the flag on that vm_event reply saves us from having to issue an extra
> > > memop hypercall afterwards.
> >
> > Can you do a multicall and batch both operations in a single
> > hypercall?
> >
> > That would seem more natural than adding duplicated interfaces.
> 
> Not in a straightforward way, no. There is no API exposed in libxc to
> do a multicall. Even if that were an option, it is still easier for me
> to just flip a bit in the response field than to construct a whole
> standalone hypercall structure to be sent as part of a multicall.

Right, I can see that it's easier, but it seems like a bad choice from
an interface PoV. You are the maintainer of both subsystems, but it
would seem to me that it's in your best interest to keep the
interfaces separate and clean.
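
For reference, the vm_event-reply shortcut being discussed would look
roughly like the sketch below from the monitor's side. This is only a
minimal illustration under stated assumptions: the ring setup and event
consumption are elided, the include paths are indicative, and the helper
name is made up; only the two VM_EVENT_FLAG_RESET_FORK_* bits come from
this patch.

#include <string.h>
#include <xen/vm_event.h>

/*
 * Fill in the reply the monitor already has to send anyway, piggybacking
 * the new fork-reset bits on it so no extra memop hypercall is needed.
 * Illustrative helper only; ring handling is not shown.
 */
static void fill_fork_reset_reply(vm_event_response_t *rsp,
                                  const vm_event_request_t *req)
{
    memset(rsp, 0, sizeof(*rsp));
    rsp->version = VM_EVENT_INTERFACE_VERSION;
    rsp->vcpu_id = req->vcpu_id;
    rsp->reason  = req->reason;
    rsp->flags   = VM_EVENT_FLAG_VCPU_PAUSED        /* unpause the vCPU        */
                 | VM_EVENT_FLAG_RESET_FORK_STATE   /* reset the fork's state  */
                 | VM_EVENT_FLAG_RESET_FORK_MEMORY; /* drop unshared entries   */
}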

Would it be possible for the reset XENMEM_FORK op to have the side
effect of performing what you would instead do with the vm_event
hypercall?
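
By comparison, driving the same reset purely through the memop interface
could look like the minimal sketch below, using the field layout quoted
above. Again purely illustrative: actually issuing the hypercall (and
whatever shape the libxc wrapper ends up with) is elided, and only the
new XENMEM_FORK_RESET_* flags come from this patch.

#include <string.h>
#include <xen/memory.h>

/*
 * Build a XENMEM_sharing_op fork-reset request that only drops the fork's
 * unshared memory while keeping the current vCPU state.
 * Illustrative only; issuing the memop is not shown.
 */
static void build_fork_reset_op(xen_mem_sharing_op_t *mso, domid_t fork_domid)
{
    memset(mso, 0, sizeof(*mso));
    mso->op = XENMEM_sharing_op_fork_reset;
    mso->domain = fork_domid;
    mso->u.fork.flags = XENMEM_FORK_RESET_MEMORY; /* memory only, no state reset */
}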

Thanks, Roger.
