
RE: [Xen-devel] question about xen virtual base address



On Mon, 2008-03-10 at 09:26 -0700, Agarwal, Lomesh wrote:
> If there are more than one PV guests then how does Xen share its virtual
> address with all of the PV guests?
> BTW how does this sharing translate to performance gain?

You could -- in theory -- put the entire VMM, or at least a large part
of it, into a separate virtual address space and switch address spaces
on each entry and return. That avoids 'address space compression', but
it's very costly, since every transition would then require a TLB flush.

So instead one lets a process, its guest kernel and the VMM share one
virtual address space. As with regular OSes: one address space per guest
process. Control transfers between process, kernel and VMM then do not
switch address spaces, only privilege level, stack, and EIP.
-> much faster.

The top of the virtual address range, dedicated to the VMM, is the same
in every address space, typically mapped by the same page-table set.
Likewise, all processes in a guest system share the same kernel range,
just as they would on a native OS. Only the process part is unique. The
address space only has to change when switching between guests, or
between processes within a guest.

That's not the whole story on memory virtualization, but it's the
general idea.

> -----Original Message-----
> From: Keir Fraser [mailto:keir.fraser@xxxxxxxxxxxxx] 
> Sent: Sunday, March 09, 2008 8:31 AM
> To: Agarwal, Lomesh; xen-devel@xxxxxxxxxxxxxxxxxxx
> Subject: Re: [Xen-devel] question about xen virtual base address
> 
> Answer to both questions is that we want to keep out of the way of
> paravirtual guest OS addressing. Guests want to use virtual addresses
> from 0x0, so Xen has to be raised up out of the way. Similarly, guests
> may expect to use GDT entries starting from entry 0 upwards, and hence
> Xen gets pushed up to the last two pages of a full-size GDT. Both of
> these shifts are required because Xen shares its own virtual-memory
> structures (GDT, page tables) with the guest, for efficient switching
> between guest context and hypervisor context.
> 
>  -- Keir
> 
> On 8/3/08 23:07, "Agarwal, Lomesh" <lomesh.agarwal@xxxxxxxxx> wrote:
> 
> > I have two questions regarding the x86_64 Xen boot code -
> > 1. It looks like the Xen base virtual address is 0xFFFF830000000000.
> > That's why the page table needs mirror mappings for both lower and
> > higher virtual addresses. If the base virtual address had been 0
> > (__PAGE_OFFSET), the code in x86_64.S would have been much easier to
> > understand and maintain. So, is there a specific reason to choose
> > this high virtual address?
> > 2. Why do we need to subtract FIRST_RESERVED_GDT_BYTE (14 pages) from
> > the address of gdt_table when calculating the base address for the
> > GDT? How does this subtraction give the right address for the GDT?
> > 
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@xxxxxxxxxxxxxxxxxxx
> > http://lists.xensource.com/xen-devel
> 
> 
> 
-- 
Daniel Stodden
LRR     -      Lehrstuhl für Rechnertechnik und Rechnerorganisation
Institut für Informatik der TU München             D-85748 Garching
http://www.lrr.in.tum.de/~stodden         mailto:stodden@xxxxxxxxxx
PGP Fingerprint: F5A4 1575 4C56 E26A 0B33  3D80 457E 82AE B0D8 735B


