
RE: [Xen-devel] Re: vram_dirty vs. shadow paging dirty tracking

> > Yep, it's been in the roadmap doc for quite a while. However, the
> > log-dirty code isn't ideal for this. We'd need to extend it to enable
> > it to be turned on for just a subset of the GFN range (we could use a
> > Xen rangeset for this).
> >
> Okay, I was curious if the log-dirty stuff could do ranges.  I guess
> not.

It could certainly be added, but I prefer the dirty bit solution to this
particular problem. 
> > Even so, I'm not super keen on the idea of tearing down and
> > rebuilding 1024 PTEs up to 50 times a second.
> >
> > A lower-overhead solution would be to do scanning and resetting of
> > the dirty bits on the PTEs (and a global TLB flush).
> Right, this is the approach I was assuming.  There's really no use in
> tearing down the whole PTE (since you would have to take an extraneous
> read fault).
> > In the general case this is tricky as the framebuffer could be
> > mapped by multiple PTEs. In practice, I believe this doesn't happen
> > for either Linux or Windows.
> >
> I wouldn't think so, but showing my ignorance for a moment, does
> shadow2 not provide a mechanism to look up VAs given a GFN?  This
> lookup could be cheap if the structures are built during shadow page
> table construction.

No, it deliberately doesn't: threading all the PTEs that point to a GFN
can consume quite a bit of memory, introduces locking complexity that
will affect future scalability, and turns out to be completely
unnecessary for normal shadow-mode operation because some simple
heuristics get a near-perfect hit rate.

> Sounds like this is a good long term goal but I think I'll stick with
> the threading as an intermediate goal.

Yes, that's more immediately useful, thanks.

> I've got a minor concern that threading isn't going to help us much
> when dom0 is UP, since the VGA scanning won't happen while an MMIO/PIO
> request is being handled.

I think the VGA scanning burns enough CPU to stand a good chance of
getting pre-empted when an MMIO/PIO request arrives. We need to make
sure there's no synchronization required that prevents this.


Xen-devel mailing list