Re: [Xen-devel] [PATCH v3 3/3] tools: introduce parameter max_wp_ram_ranges.

Paul Durrant writes ("RE: [Xen-devel] [PATCH v3 3/3] tools: introduce parameter max_wp_ram_ranges."):
> > From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> > I wouldn't be happy with that (and I've said so before), since it
> > would allow all VMs this extra resource consumption.
> The ball is back in Ian's court then.

Sorry to be vague, but: I'm not definitely objecting to some toolstack
parameter.  I'm trying to figure out whether this parameter, in this
form, with this documentation, makes some kind of sense.

In the most recent proposed patch the docs basically say (to most
users) "there is this parameter, but it is very complicated, so do not
set it."  We already have a lot of parameters of this kind.  As a
general rule they are OK if it is really the case that the parameter
should be ignored.  I am happy to have a whole lot of strange
parameters that the user can ignore.

But as far as I can tell from this conversation, users are going to
need to set this parameter in normal operation in some configurations.

I would ideally like to avoid a situation where (i) the Xen docs say
"do not set this parameter because it is confusing" but (ii) other
less authoritative sources (wiki pages, or mailing list threads, etc.)
say "oh yes just set this weird parameter to 8192 for no readily
comprehensible reason".
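
To make the shape of the setting concrete: what is being discussed is
an xl domain configuration key.  The key name below is taken from the
patch subject and the value 8192 is the figure from this thread; the
surrounding keys are just an ordinary example HVM config, not a
recommendation:

```
# Sketch of an HVM guest config using the proposed parameter.
# max_wp_ram_ranges and the value 8192 are as discussed in this
# thread; the other keys are an ordinary example configuration.
builder = "hvm"
memory = 4096
vcpus = 4
max_wp_ram_ranges = 8192
```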

I say `some configurations' because, I'm afraid, most of the
conversation about hypervisor internals has gone over my head.  Let me
try to summarise (correct me if I am wrong):

 * There are some hypervisor tracking resources associated with each
   emulated MMIO range.

   (Do we mean the memory ranges that are configured in the hypervisor
   to be sent to an ioemu via the ioreq protocol - ie, the system
   which is normally used in HVM domains to interface to the device
   model?)

   (Are these ranges denominated in guest-physical space?)

 * For almost all domains the set of such MMIO ranges is small or
   very small.

 * Such ranges are sometimes created by, or specified by, the guest.

   (I don't understand why this should be the case but perhaps this is
   an inherent aspect of the design of this new feature.)

 * If too many such ranges were created by the guest the guest could
   consume excessive hypervisor memory.

 * Therefore normally the number of such ranges per guest is (or
   should be) limited to a small number.

 * With an `Intel GVT-g Broadwell platform' and `vGPU in GVT-g' or
   `GVT-d' it may be necessary for functionality to allow a larger
   number of such ranges.

But to be able to know what the right interface is for the system
administrator (and what to write in the docs), I need to know:

 * Is it possible for libxl to somehow tell from the rest of the
   configuration that this larger limit should be applied ?

 * In the configurations where a larger number is needed, what larger
   limit is appropriate ?  How should it be calculated ?

 * How do we know that this does not itself give an opportunity for
   hypervisor resource exhaustion attacks by guests ?  (Note: if it
   _does_ give such an opportunity, this should be mentioned more
   clearly in the documentation.)

 * If we are talking about MMIO ranges for ioreq servers, why do
   guests which do not use this feature have the ability to create
   them at all ?

(A background problem I have is that this thread is full of product
name jargon and assumes a lot of background knowledge of the
implementation of these features - background knowledge which I lack
and which isn't in these patches.  If someone could point me at a
quick summary of what `GVT-g' and `GVT-d' are that might help.)


Xen-devel mailing list