
Re: [Xen-devel] [PATCH v2 19/20] x86/mem_sharing: reset a fork



On Thu, Dec 19, 2019 at 4:05 AM Jan Beulich <jbeulich@xxxxxxxx> wrote:
>
> On 19.12.2019 01:15, Tamas K Lengyel wrote:
> > On Wed, Dec 18, 2019 at 4:02 PM Julien Grall <julien@xxxxxxx> wrote:
> >> On 18/12/2019 22:33, Tamas K Lengyel wrote:
> >>> On Wed, Dec 18, 2019 at 3:00 PM Julien Grall <julien@xxxxxxx> wrote:
> >>>> You also have multiple loops over the page_list in this function.
> >>>> Given that the page_list can be quite big, this is a recipe for
> >>>> hogging the pCPU, and for holding an RCU lock on the domain, for
> >>>> as long as the vCPU is running this call.
> >>>
> >>> There is just one loop over the page_list itself; the second loop
> >>> is over the internal list being built here, which will be a subset.
> >>> That list should in fact be small (in our tests usually <100 pages).
> >>
> >> For a start, nothing in this function guarantees that there will be
> >> only 100 pages. More to the point, I don't think it is right to
> >> implement your hypercall based only on the "normal" scenario; you
> >> should also think about the "worst" case scenario.
> >>
> >> In this case the worst case scenario is having hundreds of pages in
> >> page_list.
> >
> > Well, this is only an experimental system that's completely disabled
> > by default. I think it is fair to assume that people who make use of
> > it will know what they are doing.
>
> FWIW I'm with Julien here: the preferred course of action is to make
> the operation safe against abuse. The minimum requirement is to
> document the obvious missing pieces for this to become supported code.

That's perfectly fine by me.
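
For reference, the kind of preemptible walk being asked for could look
roughly like the sketch below. This is an illustration rather than the
actual patch: the per-page reset work is elided to a comment, while
page_list_for_each_safe(), hypercall_preempt_check() and -ERESTART are
the existing Xen primitives for this pattern.

#include <xen/errno.h>
#include <xen/mm.h>
#include <xen/sched.h>

/* Rough sketch of a preemptible page_list walk (illustrative only). */
static int fork_reset(struct domain *d)
{
    struct page_info *page, *tmp;
    int rc = 0;

    spin_lock(&d->page_alloc_lock);

    page_list_for_each_safe ( page, tmp, &d->page_list )
    {
        /* ... undo this page's divergence from the parent here ... */

        /* Bail out periodically so a huge page_list can't hog the pCPU. */
        if ( hypercall_preempt_check() )
        {
            rc = -ERESTART;
            break;
        }
    }

    spin_unlock(&d->page_alloc_lock);

    return rc;
}

On -ERESTART the caller would arrange a continuation (e.g. via
hypercall_create_continuation()) so the operation resumes instead of
monopolizing the pCPU; a real implementation would also need to record
how far the walk got so the continuation doesn't start over from scratch.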

Tamas
