
Re: [Xen-devel] [PATCH v2 19/20] x86/mem_sharing: reset a fork



> > Well, this is only an experimental system that's completely disabled
> > by default. Making the assumption that people who make use of it will
> > know what they are doing I think is fair.
>
> I assume that if you submit this new hypercall upstream then there is a
> longer-term plan to have more people use it and potentially make it
> "stable". If not, then it raises the question of why this is being pushed upstream...

It's being pushed upstream so other people can make use of it, even if
it's not "production quality". Beyond what is being sent here there
are no longer-term plans from Intel (at this point) to support this in
any way. The alternative would be that we just release a fork (or just
the patches) and walk away. If the Xen community wants to announce
that only code that will have long-term support and is "stable" is
accepted upstream, that IMHO will drastically reduce people's interest
in sharing anything.

> In any case, all the known assumptions should be documented so they can
> be fixed rather than forgotten until it is rediscovered via an XSA.

Fair enough.

> >
> >>
> >>> Granted the list can grow larger, but in those cases it's likely better
> >>> to just discard the fork and create a new one. So in my opinion adding
> >>> a hypercall continuation to this is not needed
> >>
> >> How would the caller know that? What would happen if the caller ends up
> >> calling this with a growing list?
> >
> > The caller knows by virtue of knowing how long the VM was executed
> > for. In the use case this is targeted at, the VM was executing only for
> > a couple of seconds at most. Usually much less than that (we get about
> > ~80 resets/s with AFL). During that time it's extremely unlikely you
> > get more than ~100 pages deduplicated (that is, written to). But
> > even if there are more pages, it just means the hypercall might take a
> > bit longer to run for that iteration.
>
> I assume if you upstream the code then you want more people to use it
> (otherwise what's the point?). In this case, you will likely have people
> who heard about the feature and want to test it but don't know the internals.
>
> Such users need to know how this can be called safely without reading the
> implementation. In other words, some documentation for your hypercall is
> needed.

Sure.

>
> > I don't see any issue with not
> > breaking up this hypercall with continuation even under the worst case
> > situation though.
>
> Xen only supports voluntary preemption, which means that a hypercall can
> only be preempted if there is code for it. Otherwise preemption will
> mostly only happen when returning to the guest.
>
> In other words, the vCPU executing the hypercall may go past its
> timeslice and prevent other vCPUs from running.
>
> There are other problems with long-running hypercalls. Anyway, in short,
> if you ever want to get your code supported then you will need the
> hypercall to be broken up.
>
> > But if others feel that strongly as well about
> > having to have continuation for this I don't really mind adding it.
>
> I don't think the continuation work is going to be difficult, but if you
> want to delay it, then the minimum is to document such assumptions for
> your users.

I just don't see a use for it because it will never actually execute.
So to me it just looks like unnecessary dead glue. But documenting the
assumption under which this hypercall should execute is perfectly
fair.

Tamas

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

