Re: [Xen-devel] [PATCH] tools: fix build after recent xenpaging changes
On Fri, Jun 24, Ian Campbell wrote:
> On Fri, 2011-06-24 at 13:16 +0100, Tim Deegan wrote:
> > tools: fix build after recent xenpaging changes
> >
> > xenpaging now uses pthreads, so must link appropriately.
>
> Why does 23625:c49e22648d0e need a new thread to do the page in on exit?
> Can't it just signal the main loop to do it?

If the page is mapped and the gfn is not there, the attempt to map it may
block. I haven't tried it, and I think the current code will not block
(linux_privcmd_map_foreign_bulk will just loop). If it does block, the
main loop cannot proceed and process the page-in request.

> Also page_in_trigger doesn't seem safe to me:
> +void page_in_trigger(unsigned long gfn)
> +{
> +    if (!page_in_possible)
> +        return;
> +
> +    pthread_mutex_lock(&page_in_mutex);
> +    page_in_gfn = gfn;
> +    pthread_mutex_unlock(&page_in_mutex);
> +    pthread_cond_signal(&page_in_cond);
> +}
>
> Two back to back calls to this function (which is what the caller will
> do) will both update page_in_gfn without the page in thread necessarily
> running in the interim, i.e. the first gfn may be missed. I don't think
> pthread_cond_signal makes any guarantees about whether this thread or
> the signalled thread will run afterwards. For this approach to work
> page_in_gfn really needs to remain locked until the page in thread has
> finished with that particular entry, or you need a return signal, or a
> queue, or whatever.

It's coded after an example in the APUE book. The page-in thread grabs a
copy of page_in_gfn. Are you saying page_in_trigger() can be called more
than once while xc_map_foreign_pages()/munmap() is being called? If the
caller of page_in_trigger() finds the gfn is still in paging state, it
will just try again.

Olaf
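
For illustration, a minimal sketch of the queue variant Ian suggests,
assuming a fixed-depth ring is acceptable. All identifiers below are
hypothetical; this is not the actual xenpaging code.

#include <pthread.h>

#define PAGE_IN_QUEUE_SIZE 16  /* arbitrary depth for this sketch */

static unsigned long page_in_queue[PAGE_IN_QUEUE_SIZE];
static unsigned int page_in_head, page_in_tail;   /* head == tail: empty */
static pthread_mutex_t page_in_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t page_in_cond = PTHREAD_COND_INITIALIZER;
static int page_in_possible;

/* Producer, called from the main loop. A gfn is only dropped if the
 * ring is full, and in that case the gfn is still in paging state, so
 * the caller will simply trigger it again, as Olaf describes. */
void page_in_trigger(unsigned long gfn)
{
    if (!page_in_possible)
        return;

    pthread_mutex_lock(&page_in_mutex);
    if ((page_in_tail + 1) % PAGE_IN_QUEUE_SIZE != page_in_head) {
        page_in_queue[page_in_tail] = gfn;
        page_in_tail = (page_in_tail + 1) % PAGE_IN_QUEUE_SIZE;
        pthread_cond_signal(&page_in_cond);
    }
    pthread_mutex_unlock(&page_in_mutex);
}

/* Consumer: the page-in thread drains the ring, dropping the lock
 * before doing the possibly blocking map, so the main loop can keep
 * queueing requests and servicing pager events in the meantime. */
static void *page_in_thread(void *unused)
{
    for (;;) {
        unsigned long gfn;

        pthread_mutex_lock(&page_in_mutex);
        while (page_in_head == page_in_tail)
            pthread_cond_wait(&page_in_cond, &page_in_mutex);
        gfn = page_in_queue[page_in_head];
        page_in_head = (page_in_head + 1) % PAGE_IN_QUEUE_SIZE;
        pthread_mutex_unlock(&page_in_mutex);

        /* xc_map_foreign_pages()/munmap() would go here: mapping the
         * gfn forces Xen to page it back in, and may block. */
    }
    return NULL;  /* not reached */
}

With the single-slot page_in_gfn a second back-to-back trigger can
overwrite the first before the thread wakes; the ring keeps both, and a
full ring merely degrades to the existing retry behaviour.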