
Re: [Xen-devel] Crashing kernel with dom0/libxc gnttab/gntshr



On Fri, 2013-08-02 at 18:02 +0100, Stefano Stabellini wrote:
> On Fri, 2 Aug 2013, Jeremy Fitzhardinge wrote:
> > On 08/02/2013 06:50 AM, Stefano Stabellini wrote:
> > > Jeremy, at the time the code was written, you were pretty confident
> > > that page->lru couldn't be used by anybody else.
> > > Why was that?
> > 
> > Hm. Probably the reasoning was that page->lru was only used for pages
> > which are in the pagecache, mapped from files, and m2p pages are never
> > mapped from files. But maybe something else has decided to use lru for
> > non-mapped pages (transparent hugepage? page dedup?), or are m2p pages
> > getting into the pagecache somehow?
> > 
> 
> I think it could be the latter.
> For example, we have recently changed QEMU not to use O_DIRECT on foreign
> grants, to work around a network bug in the kernel.
> It might be possible that these pages end up in the pagecache after they
> have already been added to the m2p.

Vincent's test programs (one posted at the root of this thread and a
multiprocess version a few mails in) don't do any explicit I/O on the
shared pages at all; they literally don't touch them.

The test program is:
        allocate
        share
        map
        unmap
        crash
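For concreteness, a minimal sketch of that sequence against the 2013-era
libxc interfaces (assuming the xc_gntshr_*/xc_gnttab_* calls from
xenctrl.h of that period; error handling trimmed, and this obviously
only runs in a Xen guest with the gntshr/gnttab devices available)
might look like:

```c
/* Hedged sketch of the allocate/share/map/unmap sequence, not
 * Vincent's actual program.  Prototypes are the xenctrl.h ones of
 * this era and may differ on other Xen versions. */
#include <stdlib.h>
#include <sys/mman.h>
#include <xenctrl.h>

int main(void)
{
    uint32_t ref;
    uint32_t domid = 0;   /* assumption: sharing with dom0 itself */

    /* allocate + share: one writable page granted to domid */
    xc_gntshr *gs = xc_gntshr_open(NULL, 0);
    void *shared = xc_gntshr_share_pages(gs, domid, 1, &ref, 1);

    /* map: map the grant back via the gnttab device */
    xc_gnttab *gt = xc_gnttab_open(NULL, 0);
    void *mapped = xc_gnttab_map_grant_ref(gt, domid, ref, PROT_READ);

    /* unmap: the reported kernel crash follows this step */
    xc_gnttab_munmap(gt, mapped, 1);

    xc_gnttab_close(gt);
    xc_gntshr_munmap(gs, shared, 1);
    xc_gntshr_close(gs);
    return 0;
}
```

Note that the pages are never read or written between share and unmap,
which is the point: whatever corrupts them is not triggered by I/O.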

The second version moves the map/unmap/crash into a separate process
(achieved with fork). I suppose it might still be interesting to split
into two completely separate executables to check for weird cross talk
between share and map in related (i.e. parent-child) processes.

I would hope the gntshr interface pins the shared pages down so that we
aren't worrying about swapping etc., but in any case the crash doesn't
appear to be at all probabilistic.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
