
Re: [Xen-devel] [PATCH v3 3/3] tools: introduce parameter max_wp_ram_ranges.



Jan Beulich writes ("Re: [Xen-devel] [PATCH v3 3/3] tools: introduce parameter 
max_wp_ram_ranges."):
> On 04.02.16 at 10:38, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:
> > So another question is, if value of this limit really matters, will a
> > lower one be more acceptable(the current 256 being not enough)?
> 
> If you've carefully read George's replies, [...]

Thanks to George for the very clear explanation, and also for an
illuminating in-person discussion.

It is disturbing that, as a result of me as a tools maintainer asking
questions about what seems to me to be a troublesome user-visible
control setting in libxl, we are now apparently revisiting lower
layers of the hypervisor design, which have already been committed.

While I find George's line of argument convincing, neither I nor
George is a maintainer of the relevant hypervisor code.  I am not
going to insist that anything in the hypervisor be done differently,
and I am not trying to use my tools maintainer position to that end.

Clearly there has been a failure in our workflow: these things should
have been considered and reviewed properly together.  But given where
we are now, I think this discussion about hypervisor internals is
probably a distraction.


Let me pose again some questions to which I still don't have clear
answers:

 * Is it possible for libxl to somehow tell from the rest of the
   configuration that this larger limit should be applied ?

   AFAICT there is nothing in libxl directly involving vgpu.  How can
   libxl be used to create a guest with vgpu enabled ?  I had thought
   that this was done merely with the existing PCI passthrough
   configuration, but it now seems that somehow a second device model
   would have to be started.  libxl doesn't have code to do that.

 * In the configurations where a larger number is needed, what larger
   limit is appropriate ?  How should it be calculated ?

   AFAICT from the discussion, 8192 is a reasonable bet.  Is everyone
   happy with that ?  (A strawman of how libxl might pick that figure
   follows below.)
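
To make the first question concrete, here is the sort of thing I have
in mind, as a strawman only: deriving the limit from the rest of the
configuration rather than exposing a raw knob.  The vgpu indicator and
the helper are entirely hypothetical (libxl has no such field today),
and 8192 is simply the figure suggested in this thread:

    #include <libxl.h>

    /* Hypothetical sketch only: libxl_domain_build_info has no
     * vgpu_enabled member, and this helper does not exist.  It just
     * illustrates computing the limit from the configuration. */
    static uint32_t pick_max_wp_ram_ranges(
        const libxl_domain_build_info *b_info)
    {
        if (!b_info->u.hvm.vgpu_enabled)   /* hypothetical field */
            return 256;                    /* current default limit */
        return 8192;                       /* figure from this thread */
    }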

Ian.

PS: Earlier I asked:

 * How do we know that this does not itself give an opportunity for
   hypervisor resource exhaustion attacks by guests ?  (Note: if it
   _does_ give such an opportunity, this should be mentioned more
   clearly in the documentation.)

 * If we are talking about mmio ranges for ioreq servers, why do
   guests which do not use this feature have the ability to create
   them at all ?

I now understand that these mmio ranges are created by the device
model.  Of course the device model needs to be able to create mmio
ranges for the guest.  And since they consume hypervisor resources,
the number of these must be limited (device models not necessarily
being trusted).
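
For completeness, the shape of check I would expect on the hypervisor
side is a simple per-ioreq-server quota, along these lines.  This is a
sketch only, with made-up names; the real accounting lives in the
rangeset handling:

    #include <errno.h>

    /* Hypothetical sketch: cap how many ranges one device model may
     * have the hypervisor track, so that an untrusted device model
     * cannot exhaust hypervisor memory. */
    struct ioreq_server {
        unsigned int nr_ranges;   /* ranges currently tracked */
        unsigned int max_ranges;  /* quota, e.g. set by the toolstack */
        /* ... */
    };

    static int track_range(struct ioreq_server *s,
                           unsigned long start, unsigned long end)
    {
        if ( s->nr_ranges >= s->max_ranges )
            return -ENOSPC;
        s->nr_ranges++;
        /* ... record [start, end] in the server's rangeset ... */
        return 0;
    }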
