
Re: [Xen-devel] [PATCH] Provide support for multiple frame buffers in Xen.



Hi Robert,

I've spent a bit more time digging around and I think I have a better
idea of why you've done things the way you did.  The 'simple' version I
was thinking of doesn't work as well as I thought. :|

At 13:45 -0400 on 22 Oct (1350913547), Robert Phillips wrote:
> I believe the bug was toward the end of the function where it used to
> call clear_page(l1) The function copies bits from l1 into a temporary
> bitmap, then copies them from there to the user-provided dirty_bitmap.
> When it's done, it clears the page at l1.  But two framebuffers might
> cohabit that page, not overlapping but at distinct areas within it.
> Reading the dirtiness for one frame buffer and then clearing the whole
> page wipes out information "owned" by the other frame buffer.  This
> bug would not show up if there is only one frame buffer so your live
> migration code is ok.

Yep, understood.
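
For the archives, a minimal sketch of the fix as I understand it (names
and bit layout are my own, not the patch's): instead of clear_page(l1),
clear only the bits belonging to this framebuffer's sub-range, so a
second framebuffer sharing the same bitmap page keeps its dirty bits.

```c
#include <stdint.h>

/* Clear bits [first_bit, first_bit + nr_bits) in the bitmap page at l1,
 * leaving bits owned by any other framebuffer in the same page intact. */
static void clear_range_bits(uint8_t *l1, unsigned int first_bit,
                             unsigned int nr_bits)
{
    unsigned int i;

    for ( i = first_bit; i < first_bit + nr_bits; i++ )
        l1[i / 8] &= ~(1U << (i % 8));
}
```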

> And when it's time to look for dirty bits, we know precisely which
> PTEs to look at.  The old code used to scan all page tables
> periodically and we would see a performance hit with precisely that
> periodicity.

Was this caused by the 2-second timeout where it would try to unmap the
vram if it hadn't been dirtied in that time?  Were you finding that it
would unmap and then immediately try to map it again?

> One unfortunate bit of complexity relates to the fact that several
> PTEs can map to the same guest physical page.  We have to bookkeep
> them all

The old code basically relied on this not happening (assuming that
framebuffers would be mapped only once).  Is that assumption just
wrong?  Is it broken by things like DirectX?

> so each PTE that maps to a guest physical page must be
> represented by its own dv_paddr_link, and all the links that refer to
> the same guest page are chained together.  The head of the
> linked list is the entry in the range's pl_tab array that corresponds
> to that guest physical page.

Right; you've built a complete reverse mapping from pfn to ptes.
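
Roughly this shape, I think (struct and function names are my guesses,
simplified from what the patch actually does; in particular I use the
pl_tab entry as a dummy list head rather than an in-place first link):

```c
#include <stdint.h>
#include <stdlib.h>

typedef uint64_t paddr_t;

/* One link per shadow PTE that maps a given guest pfn. */
struct dv_paddr_link {
    paddr_t sl1ma;               /* machine address of the shadow PTE */
    struct dv_paddr_link *next;  /* next PTE mapping the same pfn */
};

/* A tracked vram range: one list head per pfn in [begin_pfn, end_pfn). */
struct dv_range {
    unsigned long begin_pfn, end_pfn;
    struct dv_paddr_link *pl_tab;
};

/* Record that the shadow PTE at sl1ma maps pfn: prepend a link to
 * that pfn's list.  Returns 0 on success, -1 on error. */
static int link_pte(struct dv_range *r, unsigned long pfn, paddr_t sl1ma)
{
    struct dv_paddr_link *pl;

    if ( pfn < r->begin_pfn || pfn >= r->end_pfn )
        return -1;

    pl = malloc(sizeof(*pl));
    if ( pl == NULL )
        return -1;

    pl->sl1ma = sl1ma;
    pl->next = r->pl_tab[pfn - r->begin_pfn].next;
    r->pl_tab[pfn - r->begin_pfn].next = pl;
    return 0;
}
```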

> re: " since we already maintain a sparse bitmap to hold the dirty log"
> I don't believe the dirty log is maintained except when page_mode is set for 
> PG_log_dirty.
> That mode exists for live migrate and the discipline for entering/leaving it 
> is quite different from the finer granularity needed for dirty vram.

Sorry, I had got confused there.  I was thinking that we could move over
more to PG_log_dirty-style operation, where we'd trap on writes and
update the bitmap (since we now keep that bitmap as a trie the
sparseness would be OK).  But on closer inspection the cost of clearing
all the mappings when the bitmap is cleared would be either too much
overhead (throw away _all_ shadows every time) or about as complex as
the mechanism needed to scan the _PAGE_DIRTY bits.
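
For concreteness, the _PAGE_DIRTY scan I have in mind looks roughly
like this (a simplification in my own names, using a plain pointer list
rather than the patch's dv_paddr_link machinery): walk the PTEs that
map a pfn, test-and-clear the hardware dirty bit in each, and report
whether any of them had dirtied the page.

```c
#include <stdint.h>
#include <stddef.h>

#define _PAGE_DIRTY (1UL << 6)   /* x86 PTE dirty bit */

/* Simplified stand-in for the reverse-map list of PTEs per pfn. */
struct pte_link {
    uint64_t *pte;               /* pointer at the shadow PTE */
    struct pte_link *next;
};

/* Returns 1 if any PTE mapping this pfn had _PAGE_DIRTY set, clearing
 * the bit as it goes so the next scan starts from a clean slate. */
static int pfn_was_dirtied(struct pte_link *head)
{
    int dirty = 0;

    for ( ; head != NULL; head = head->next )
    {
        if ( *head->pte & _PAGE_DIRTY )
        {
            *head->pte &= ~_PAGE_DIRTY;
            dirty = 1;
        }
    }
    return dirty;
}
```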

I think I have a better idea of the intention of the patch now; I'll go
over the code in detail today.

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

