
Re: [Xen-devel] compat mode argument translation area


  • To: Jan Beulich <JBeulich@xxxxxxxx>
  • From: Keir Fraser <keir@xxxxxxx>
  • Date: Mon, 04 Feb 2013 17:50:02 +0000
  • Cc: xen-devel <xen-devel@xxxxxxxxxxxxx>
  • Delivery-date: Mon, 04 Feb 2013 17:51:21 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>
  • Thread-index: Ac4DAA6XBbBj6IYIuk2DIs/W9ryVaA==
  • Thread-topic: compat mode argument translation area

On 04/02/2013 16:50, "Jan Beulich" <JBeulich@xxxxxxxx> wrote:

> Hi Keir,
> 
> this, originally having been at a fixed location outside of Xen virtual
> address ranges, has seen a number of changes over the years, with
> the net effect that right now we're requiring an order-1 allocation
> from the Xen heap. Obviously it would be much better if this got
> populated with order-0 allocations from the domain heap.
> 
> Considering that there's going to be one such area per vCPU (less
> those vCPU-s that don't need translation, i.e. 64-bit PV ones), it
> seems undesirable to me to use vmap() for this purpose.
> 
> Instead I wonder whether we shouldn't go back to putting this at
> a fixed address (5GB or 8GB) at least for PV guests, thus reducing
> the virtual address range pressure (compared to the vmap()
> approach as well as for the case that these areas might need
> extending). Was there any other reason that you moved them out
> of such a fixed area than wanting to use mostly the same code
> for PV and HVM (which ought to be possible now that there's a
> base pointer stored for each vCPU)?

The original reason was so that we only needed to allocate one xlat_area's
worth of memory per physical CPU.

Since we now allow a hypercall to sleep (via a waitqueue), we can no longer
do that anyway, so we are back to allocating an xlat_area for every vCPU. And
I suppose we might as well map that at a fixed virtual address.
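
Roughly what that would look like (just a sketch, of course: XLAT_AREA_SIZE
and xlat_area_base() are made-up placeholders rather than existing symbols,
and error unwinding on failure is omitted):

#include <xen/errno.h>
#include <xen/mm.h>
#include <xen/sched.h>

/*
 * Sketch only: back the translation area with order-0 domheap pages and
 * map them at a fixed virtual address derived from the vCPU id.
 */
static int setup_xlat_area(struct vcpu *v)
{
    unsigned long va = xlat_area_base(v->vcpu_id);  /* placeholder */
    unsigned int i;

    for ( i = 0; i < XLAT_AREA_SIZE >> PAGE_SHIFT; i++ )
    {
        struct page_info *pg = alloc_domheap_page(NULL, 0); /* order-0 */

        if ( pg == NULL )
            return -ENOMEM;

        /* Map one page at a time into the fixed range, read/write. */
        if ( map_pages_to_xen(va + (i << PAGE_SHIFT), page_to_mfn(pg),
                              1, PAGE_HYPERVISOR) )
            return -ENOMEM;
    }

    return 0;
}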

Do we care about vmap() pressure though? Is there a downside to making the
vmap area as big as we like? I mean even the existing 16GB area is good for
a million vcpus or so ;)
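
(For what it's worth, the arithmetic behind that claim, assuming the
current two-page, i.e. order-1, area per vCPU:

    16 GiB vmap area / 8 KiB per xlat_area = 2^34 / 2^13 = 2^21

i.e. roughly two million vCPUs' worth of space.)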

 -- Keir

> An alternative might be to use another per-domain L3 slot for this.
> 
> Jan
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

