
RE: [Xen-devel] high memory dma update: up against a wall

> > I saw this problem on x86_64 with 6GB of RAM: if I made dom0 too
> > big, the allocator put it in high memory. The Linux kernel booted
> > fine, but the partition scan failed and it couldn't mount root.
> Why not have the allocator force all driver domains to be in
> memory < 4GB?

It's irrelevant whether the driver domains themselves are in memory
below 4GB -- they are passed pages by other domains, and those are the
pages they need to DMA into.

It's clear that privileged domains need to support bounce buffers for
hardware that can't DMA above 4GB.

We could try to optimise the situation by giving each domain some
memory below 4GB, so that it can maintain a zone to allocate from in
preference for skbs etc. That can't help for most block IO, though,
since pretty much any of the domain's pages can be a DMA target.

However, I'm not convinced that it's worth implementing such a solution.

Keir and I just looked in Linux's driver directory and found that pretty
much all the chips used in server hardware over the last few years are
>4GB capable: tg3, e1000, mpt_fusion, aacraid, megaraid, aic7xxx etc.
The only exception seems to be IDE/SATA controllers.

For the latter, having separate memory zones won't help. We need to use
the GART or another IOMMU to translate the DMA addresses in the driver
domain.

I think we should just go with bounce buffers for the moment, and add
IOMMU support once we've had a chance to discuss it further. I suspect
that on most server hardware we won't need it anyway.

[Is there much extant hardware with >4GB of memory that doesn't have
disk or network hardware that are capable of DMA above 4GB? My guess
would be no, but can anyone put forward hard data?]

