
Re: [Xen-devel] [PATCH-for-4.13 v7] Rationalize max_grant_frames and max_maptrack_frames handling



On Fri, Nov 29, 2019 at 10:15:33PM +0100, Marek Marczykowski-Górecki wrote:
> On Fri, Nov 29, 2019 at 05:44:23PM +0000, Wei Liu wrote:
> > On Fri, Nov 29, 2019 at 05:36:11PM +0000, Wei Liu wrote:
> > > On Fri, Nov 29, 2019 at 05:24:45PM +0000, Paul Durrant wrote:
> > > > From: George Dunlap <george.dunlap@xxxxxxxxxx>
> > > > 
> > > > Xen used to have single, system-wide limits for the number of grant
> > > > frames and maptrack frames a guest was allowed to create. Increasing
> > > > or decreasing this single limit on the Xen command-line would change
> > > > the limit for all guests on the system.
> > > > 
> > > > Later, per-domain limits for these values were created. The system-wide
> > > > limits became strict limits: domains could not be created with higher
> > > > limits, but could be created with lower limits. However, that change
> > > > also introduced a range of different "default" values into various
> > > > places in the toolstack:
> > > > 
> > > > - The python libxc bindings hard-coded these values to 32 and 1024,
> > > >   respectively
> > > > - The libxl default values are 32 and 1024 respectively.
> > > > - xl will use the libxl default for maptrack, but does its own default
> > > >   calculation for grant frames: either 32 or 64, based on the max
> > > >   possible mfn.
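
[For context: these are the per-guest limits that can also be set explicitly in an xl domain configuration file via the `max_grant_frames` and `max_maptrack_frames` keys; the values below are purely illustrative.]

```
max_grant_frames = 64
max_maptrack_frames = 1024
```
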
> > > > 
> > > > These defaults interact poorly with the hypervisor command-line limit:
> > > > 
> > > > - The hypervisor command-line limit cannot be used to raise the limit
> > > >   for all guests anymore, as the default in the toolstack will
> > > >   effectively override this.
> > > > - If you use the hypervisor command-line limit to *reduce* the limit,
> > > >   then the "default" values generated by the toolstack are too high,
> > > >   and all guest creations will fail.
> > > > 
> > > > In other words, the toolstack defaults require any change to be
> > > > effected by having the admin explicitly specify a new value in every
> > > > guest.
> > > > 
> > > > In order to address this, have grant_table_init treat negative values
> > > > for max_grant_frames and max_maptrack_frames as instructions to use the
> > > > system-wide default, and have all the above toolstacks default to
> > > > passing -1 unless a different value is explicitly configured.
> > > > 
> > > > This restores the old behavior in that changing the hypervisor
> > > > command-line option can change the behavior for all guests, while
> > > > retaining the ability to set per-guest values.  It also removes the
> > > > bug that reducing the system-wide max will cause all domains without
> > > > explicit limits to fail.
> > > > 
> > > > NOTE: - The Ocaml bindings require the caller to always specify a value,
> > > >         and the code to start a xenstored stubdomain hard-codes these
> > > >         to 4 and 128 respectively; this behavior will not be modified.
> > > > 
> > > > Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxx>
> > > > Signed-off-by: Paul Durrant <pdurrant@xxxxxxxxxx>
> > > > Signed-off-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
> > > > Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
> > > > Release-acked-by: Juergen Gross <jgross@xxxxxxxx>
> > > 
> > > Acked-by: Wei Liu <wl@xxxxxxx>
> > 
> > In theory I should wait for Marek's ack for changes to python binding,
> > but the changes are trivial there so I plan to push this patch later
> > tonight to both staging and staging-4.13 so that it can be tested over
> > the weekend.
> > 
> > Marek, I apologise in advance in case you disagree with my assessment.
> 
> FWIW, for python part:
> Acked-by: Marek Marczykowski-Górecki <marmarek@xxxxxxxxxxxxxxxxxxxxxx>

Thanks. I will fold this in.

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

