[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-devel] Two shadow page tables for HVM

  • To: Xen Devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: Emre Can Sezer <ecsezer@xxxxxxxx>
  • Date: Mon, 29 Dec 2008 11:17:40 -0500
  • Delivery-date: Mon, 29 Dec 2008 08:18:20 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

Tim Deegan wrote:

At 12:05 -0500 on 22 Dec (1229947511), Emre Can Sezer wrote:
Wouldn't this mean that the two page tables are NOT synchronized? When we switch paging modes, wouldn't we have to rebuild the entire set of shadow page tables from the guest?

We maintain shadow pagetables for pagetables that are not in use, and
even for modes that aren't in use.  We only get rid of shadows when we're
running out of memory, or when the guest uses the page for something else.
If we didn't do that our context-switch costs would be enormous.
I've been trying to understand the shadow code for a while now and I
have one last question about this approach.  In my case, the guest OS
will have only a single set of page tables, and in return I will have two
sets of shadows for them.  I understand that once you change your shadow
mode, the shadow pages are still kept and the mapping is stored in the
shadow hash.  However, if a page table is updated in one mode, how does
the other mode learn of this change? As far as I understand, the same gfn
will be inserted into the hash twice, once for each shadow type.  So how
does Xen determine that the guest page's contents have changed, and how
does that change propagate to the second shadow mode's page tables with
the appropriate permission changes?

Thanks for all the input,
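To make the question concrete, here is a minimal sketch of how a shadow hash keyed on (gfn, shadow type) could let a guest page-table write reach every mode's shadow of that frame. All names, types, and structures below are made up for illustration; they are not Xen's actual code:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical miniature of a shadow hash keyed on (gfn, shadow type).
 * The same gfn can appear once per type, i.e. once per paging mode. */

enum sh_type { SH_L1_32, SH_L1_PAE, SH_NTYPES };

#define HASH_BUCKETS 64

struct shadow {
    uint64_t gfn;          /* guest frame this shadow mirrors */
    enum sh_type type;     /* which paging mode it belongs to */
    int synced;            /* 1 if up to date with the guest page */
    struct shadow *next;   /* hash chain */
};

static struct shadow *hash[HASH_BUCKETS];

static unsigned bucket(uint64_t gfn) { return (unsigned)(gfn % HASH_BUCKETS); }

/* Insert a shadow; duplicates per gfn are fine as long as types differ. */
void shadow_hash_insert(struct shadow *s)
{
    unsigned b = bucket(s->gfn);
    s->next = hash[b];
    hash[b] = s;
}

/* Look up the shadow of a given gfn for one specific type. */
struct shadow *shadow_hash_lookup(uint64_t gfn, enum sh_type type)
{
    for (struct shadow *s = hash[bucket(gfn)]; s; s = s->next)
        if (s->gfn == gfn && s->type == type)
            return s;
    return 0;
}

/* On an emulated write to a shadowed guest page table, walk every
 * shadow type so that both modes' shadows see the update. */
void shadow_validate_guest_write(uint64_t gfn)
{
    for (int t = 0; t < SH_NTYPES; t++) {
        struct shadow *s = shadow_hash_lookup(gfn, (enum sh_type)t);
        if (s)
            s->synced = 1;   /* stand-in for resyncing this shadow */
    }
}
```

The point of the sketch is only that a single guest write has to be fanned out to one lookup per shadow type, not just to the type belonging to the currently active mode.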

The reason I was thinking of synchronized page tables is that I will have to switch between them quite often -- several times during a system call -- so I want to minimize TLB flushes and make the switch as fast as possible. With synced PTs, my plan was to set the guest CR3 to point to the new top-level page table and only flush the kernel pages.

That might be just as expensive -- ISTR Keir measured the cost of invlpg
vs TLB flush a while ago and found that invlpg'ing more than one or two
PTEs was slower than just flushing the whole TLB.
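That trade-off can be sketched as a simple threshold check; the threshold value and the flush primitives below are placeholders for illustration, not measured numbers or Xen's real functions:

```c
#include <stddef.h>

/* Illustrative cost model for the invlpg-vs-full-flush trade-off:
 * past a small number of PTEs, one full TLB flush is assumed cheaper
 * than invlpg-ing each page.  THRESHOLD is a made-up value. */

#define INVLPG_THRESHOLD 2

static int invlpg_count;      /* counters standing in for the real */
static int full_flush_count;  /* hardware flush primitives          */

static void invlpg_one(void *va) { (void)va; invlpg_count++; }
static void flush_tlb_all(void) { full_flush_count++; }

/* Flush a set of virtual addresses the cheaper way. */
void flush_area(void **vas, size_t n)
{
    if (n <= INVLPG_THRESHOLD) {
        for (size_t i = 0; i < n; i++)
            invlpg_one(vas[i]);   /* few pages: targeted invalidation */
    } else {
        flush_tlb_all();          /* many pages: just flush everything */
    }
}
```

With a threshold of one or two, flushing "only the kernel pages" of a guest would almost always fall on the full-flush side of the check, which is Tim's point.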

When considering the performance penalty of flushing the kernel page tables from the TLB, how significant is traversing all the shadow page tables for the guest kernel and updating their permissions? If there isn't an order-of-magnitude difference, it might be reasonable to take the shortcut in implementation.

I don't have any measurements for doing walks of the whole set of
shadows, but in general we've found it's worth doing almost any trick
that will avoid that.


