
[Xen-devel] RE: vram_dirty vs. shadow paging dirty tracking


  • To: "Anthony Liguori" <aliguori@xxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: "Ian Pratt" <Ian.Pratt@xxxxxxxxxxxx>
  • Date: Tue, 13 Mar 2007 21:02:23 -0000
  • Delivery-date: Tue, 13 Mar 2007 14:05:18 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AcdlpnoXH6PGBcUCTMy5+C0qlQcRlwACtz7A
  • Thread-topic: vram_dirty vs. shadow paging dirty tracking

> When thinking about multithreading the device model, it occurred to me
> that it's a little odd that we're doing a memcmp to determine which
> portions of the VRAM have changed.  Couldn't we just use dirty page
> tracking in the shadow paging code?  That should significantly lower
> the overhead of this, plus I believe the infrastructure is already
> mostly there in the shadow2 code.
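
(For reference, the memcmp scan in question is roughly of the following
shape. This is a minimal, purely illustrative sketch, not the actual
device-model code; the names and the assumption of a simple linear
framebuffer with a shadow copy are made up for illustration.)

/* Hypothetical sketch of a memcmp-style VRAM dirty scan: compare the
 * live framebuffer against a shadow copy, page by page, and record
 * which pages changed. */
#include <stdint.h>
#include <string.h>

#define VRAM_PAGE_SIZE 4096

static void scan_vram_dirty(const uint8_t *vram, uint8_t *shadow,
                            size_t vram_bytes, uint8_t *dirty_bitmap)
{
    size_t npages = vram_bytes / VRAM_PAGE_SIZE;

    for (size_t i = 0; i < npages; i++) {
        const uint8_t *cur = vram + i * VRAM_PAGE_SIZE;
        uint8_t *old = shadow + i * VRAM_PAGE_SIZE;

        if (memcmp(cur, old, VRAM_PAGE_SIZE) != 0) {
            memcpy(old, cur, VRAM_PAGE_SIZE);      /* refresh shadow copy */
            dirty_bitmap[i / 8] |= 1u << (i % 8);  /* mark page dirty */
        }
    }
}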

Yep, it's been in the roadmap doc for quite a while. However, the
log-dirty code isn't ideal for this. We'd need to extend it so that it
can be turned on for just a subset of the GFN range (we could use a Xen
rangeset for this).
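
Something along the following lines is what I have in mind for the
range check. This is only a rough sketch under the assumption of a
single tracked GFN window; in-tree it would presumably use a real Xen
rangeset, and all names here are hypothetical rather than existing
shadow2 code.

/* Illustrative sketch: restrict log-dirty marking to one tracked GFN
 * window (e.g. the VRAM).  One bit per GFN in the window. */
#include <stdbool.h>
#include <stdint.h>

struct logdirty_window {
    uint64_t first_gfn;   /* inclusive */
    uint64_t last_gfn;    /* inclusive */
    uint8_t *bitmap;      /* one bit per GFN in the window */
};

/* Hypothetical hook on the shadow write-propagation path. */
static bool logdirty_mark(struct logdirty_window *w, uint64_t gfn)
{
    if (gfn < w->first_gfn || gfn > w->last_gfn)
        return false;                         /* outside tracked range */

    uint64_t idx = gfn - w->first_gfn;
    w->bitmap[idx / 8] |= 1u << (idx % 8);    /* record the dirty GFN */
    return true;
}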

Even so, I'm not super keen on the idea of tearing down and rebuilding
1024 PTEs up to 50 times a second.

A lower-overhead solution would be to scan and reset the dirty bits on
the PTEs themselves (followed by a global TLB flush). In the general
case this is tricky, as the framebuffer could be mapped by multiple
PTEs, but in practice I believe this doesn't happen for either Linux or
Windows. There's always a good fallback of simply returning 'all dirty'
if that heuristic is violated. It would be good to knock this up.
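
To make that concrete, here's a rough, purely illustrative sketch of
the dirty-bit scan, assuming the framebuffer is covered by one known
run of 4K PTEs; the types and helper names are hypothetical, and the
caller is expected to issue the global TLB flush after each scan.

/* Scan and clear the x86 dirty bit on each framebuffer PTE, recording
 * dirty pages in a bitmap.  Falls back to "all dirty" if the
 * single-mapping heuristic doesn't hold. */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define PAGE_DIRTY_BIT 0x40ULL   /* x86 PTE dirty (D) bit */

static void scan_fb_ptes(volatile uint64_t *ptes, size_t npages,
                         uint8_t *dirty_bitmap, bool mapping_known)
{
    if (!mapping_known) {
        /* Heuristic violated (e.g. multiple mappings): report all dirty. */
        memset(dirty_bitmap, 0xff, (npages + 7) / 8);
        return;
    }

    for (size_t i = 0; i < npages; i++) {
        uint64_t pte = ptes[i];
        if (pte & PAGE_DIRTY_BIT) {
            dirty_bitmap[i / 8] |= 1u << (i % 8);
            /* A real implementation would clear the bit atomically. */
            ptes[i] = pte & ~PAGE_DIRTY_BIT;
        }
    }
    /* Caller then does a global TLB flush before the next scan. */
}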

Best,
Ian

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel