
Re: [Xen-devel] [RFC PATCH] page_alloc: use first half of higher order chunks when halving



On Wed, Mar 26, 2014 at 12:17:53PM +0200, Matt Wilson wrote:
> On Wed, Mar 26, 2014 at 10:55:33AM +0100, Tim Deegan wrote:
> > Hi,
> > 
> > At 13:09 -0700 on 25 Mar (1395749353), Matthew Rushton wrote:
> > > On 03/25/14 06:27, Matt Wilson wrote:
> > > > On Tue, Mar 25, 2014 at 01:19:22PM +0100, Tim Deegan wrote:
> > > >> At 13:22 +0200 on 25 Mar (1395750124), Matt Wilson wrote:
> > > >>> From: Matt Rushton <mrushton@xxxxxxxxxx>
> > > >>>
> > > >>> This patch makes the Xen heap allocator use the first half of higher
> > > >>> order chunks instead of the second half when breaking them down for
> > > >>> smaller order allocations.
> > > >>>
> > > >>> Linux currently remaps the memory overlapping PCI space one page at a
> > > >>> time. Before this change, this resulted in the MFNs being allocated in
> > > >>> reverse order and led to discontiguous dom0 memory. This forced dom0
> > > >>> to use bounce buffers for doing DMA and resulted in poor performance.
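
To make the policy difference concrete, here is a toy model (not the
actual Xen page_alloc.c code) of splitting one higher-order chunk down
to a single page. Keeping the second half at each step hands out the
highest page first, so repeated small allocations come back in
descending MFN order; keeping the first half hands them out ascending:

    #include <stdio.h>

    /*
     * Toy model of the allocator's halving loop -- not the real Xen
     * page_alloc.c code.  A chunk of 2^order pages starting at 'base'
     * is split down to a single page; 'take_first_half' selects which
     * half is kept at each step.
     */
    static unsigned long split_chunk(unsigned long base, unsigned int order,
                                     int take_first_half)
    {
        while (order > 0) {
            order--;
            if (!take_first_half)
                base += 1UL << order;   /* keep the upper half, free the lower */
            /* else: keep the lower half, free the upper half */
        }
        return base;
    }

    int main(void)
    {
        /* Split an order-3 chunk starting at (pseudo-)MFN 0. */
        printf("second-half policy (old): page %lu handed out first\n",
               split_chunk(0, 3, 0));   /* -> 7, MFNs come back descending */
        printf("first-half policy  (new): page %lu handed out first\n",
               split_chunk(0, 3, 1));   /* -> 0, MFNs come back ascending */
        return 0;
    }
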
> > > >> This seems like something better fixed on the dom0 side, by asking
> > > >> explicitly for contiguous memory in cases where it makes a difference.
> > > >> On the Xen side, this change seems harmless, but we might like to keep
> > > >> the explicitly reversed allocation on debug builds, to flush out
> > > >> guests that rely on their memory being contiguous.
> > > > Yes, I think that retaining the reverse allocation on debug builds is
> > > > fine. I'd like Konrad's take on if it's better or possible to fix this
> > > > on the Linux side.
> > > 
> > > I considered fixing it in Linux, but this was a more straightforward 
> > > change with no downside as far as I can tell. I see no reason not to 
> > > fix it in both places, but this at least behaves more reasonably for 
> > > one potential use case. I'm also interested in other opinions.
> > 
> > Well, I'm happy enough with changing Xen (though it's common code so
> > you'll need Keir's ack anyway rather than mine), since as you say it
> > happens to make one use case a bit better and is otherwise harmless.
> > But that comes with a stinking great warning:
> 
> Anyone can Ack or Nack, but I wouldn't want to move forward on a
> change like this without Keir's Ack. :-)
> 
> >  - This is not 'fixing' anything in Xen because Xen is doing exactly
> >    what dom0 asks for in the current code; and conversely
> >
> >  - dom0 (and other guests) _must_not_ rely on it, whether for
> >    performance or correctness.  Xen might change its page allocator at
> >    some point in the future, for any reason, and if linux perf starts
> >    sucking when that happens, that's (still) a linux bug.
> 
> I agree with both of these. This was just the "least change" patch for
> a particular problem we observed.
> 
> Konrad, what's the possibility of fixing this in the Linux Xen PV setup
> code? I think it'd be a matter of batching up pages and doing
> larger-order allocations in linux/arch/x86/xen/setup.c:xen_do_chunk(),
> falling back to smaller orders if allocations fail due to
> fragmentation, etc.
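
A rough sketch of that kind of "try a big order, fall back to smaller
ones" loop, with populate() as a made-up stand-in for whatever hypercall
xen_do_chunk() would issue; the names and the failure model here are
purely for illustration:

    #include <stdbool.h>
    #include <stdio.h>

    /*
     * Made-up stand-in for the real populate hypercall; here anything
     * above order 2 is pretended to fail because of fragmentation.
     */
    static bool populate(unsigned long pfn, unsigned int order)
    {
        return order <= 2;
    }

    static void populate_range(unsigned long start_pfn, unsigned long nr_pages,
                               unsigned int max_order)
    {
        unsigned long pfn = start_pfn;

        while (pfn < start_pfn + nr_pages) {
            unsigned long left = start_pfn + nr_pages - pfn;
            unsigned int order = max_order;

            /* Clamp the order to the pages left and to the pfn's alignment. */
            while (order && ((1UL << order) > left ||
                             (pfn & ((1UL << order) - 1))))
                order--;

            /* Fall back to smaller orders until one succeeds. */
            while (order && !populate(pfn, order))
                order--;

            if (order == 0 && !populate(pfn, 0)) {
                printf("giving up at pfn %#lx\n", pfn);
                return;
            }

            printf("populated %lu page(s) at pfn %#lx (order %u)\n",
                   1UL << order, pfn, order);
            pfn += 1UL << order;
        }
    }

    int main(void)
    {
        populate_range(0x100, 24, 4);
        return 0;
    }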

Could you elaborate a bit more on the use case, please?
My understanding is that most drivers use a scatter-gather list, in which
case it does not matter if the underlying MFNs in the PFN space are not
contiguous.

But I presume the issue you are hitting is with drivers doing dma_map_page
where the buffer is not 4KB but rather large (a compound page). Is that the
problem you have observed? (A sketch contrasting the two mapping styles
follows below.)

Thanks.
> 
> --msw
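
For reference, the two mapping styles being contrasted look roughly like
this in driver code. This is an illustrative kernel-style fragment with
hypothetical function names and arguments, not code from the workload
discussed in this thread:

    #include <linux/dma-mapping.h>
    #include <linux/scatterlist.h>

    /*
     * Style 1: a scatter-gather list maps each 4KB page on its own, so
     * it does not care whether the MFNs behind consecutive PFNs are
     * contiguous.
     */
    static int map_with_sg(struct device *dev, struct page **pages,
                           int npages, struct scatterlist *sgl)
    {
        int i;

        sg_init_table(sgl, npages);
        for (i = 0; i < npages; i++)
            sg_set_page(&sgl[i], pages[i], PAGE_SIZE, 0);

        return dma_map_sg(dev, sgl, npages, DMA_TO_DEVICE);
    }

    /*
     * Style 2: mapping one large (compound) page in a single call wants
     * the whole buffer to be machine-contiguous behind the PFNs; if it
     * is not, the mapping ends up going through bounce buffers.
     */
    static dma_addr_t map_compound_page(struct device *dev, struct page *page,
                                        size_t size)
    {
        return dma_map_page(dev, page, 0, size, DMA_TO_DEVICE);
    }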
