
Re: [Xen-devel] Dom0 physical networking/swiotlb/something issue in 3.7-rc1



On Fri, Oct 12, 2012 at 01:18:16PM +0100, Ian Campbell wrote:
> On Fri, 2012-10-12 at 13:10 +0100, Konrad Rzeszutek Wilk wrote:
> > On Fri, Oct 12, 2012 at 07:59:49AM -0400, Konrad Rzeszutek Wilk wrote:
> > > On Fri, Oct 12, 2012 at 11:28:08AM +0100, Ian Campbell wrote:
> > > > Hi Konrad,
> > > > 
> > > > The following patch causes fairly large packet loss when transmitting
> > > > from dom0 to the physical network, at least with my tg3 hardware, but I
> > > > assume it can impact anything which uses this interface.
> > > 
> > > Ah, that would explain why one of my machines suddenly started
> > > developing checksum errors (and had a tg3 card). I hadn't gotten
> > > deep into it.
> > > > 
> > > > I suspect that the issue is that the compound pages allocated in this
> > > > way are not backed by contiguous mfns and so things fall apart when the
> > > > driver tries to do DMA.
> > > 
> > > So this should also be easily reproducible on bare metal with 'iommu=soft'
> > > then.
> > > > 
> > > > However I don't understand why the swiotlb is not fixing this up
> > > > successfully? The tg3 driver seems to use pci_map_single on this data.
> > > > Any thoughts? Perhaps the swiotlb (either generically or in the Xen
> > > > backend) doesn't correctly handle compound pages?
> > > 
> > > The assumption is that it is just a page. I am surprised that the other
> > > IOMMUs aren't hitting this as well - ah, that is b/c they do handle
> > > a virtual address spanning more than one PAGE_SIZE..
> > 
> > So.. the GART (AMD's poor-man's IOTLB - it was used for AGP card
> > translation, but can still be used as an IOMMU - and is still present on
> > some AMD machines) looks to suffer from the same problem.
> > 
> > But perhaps not - can you explain to me whether a compound page
> > is virtually contiguous? One of the things the GART does for
> > pci_map_single is call page_to_phys(p), then feed the CPU physical address
> > (and size) into the GART engine to set up the mapping.
> > 
> > If compound pages are virtually (and physically on bare metal) contiguous
> > - this ought to work. But if they are not, then this should also break on
> > AMD machines with tg3 and an AMD GART enabled.
> 
> AFAIK compound pages are always physically contiguous. i.e. given a
> "struct page *page" which is the head of a compound page you can do
> "page++" to walk through its constituent frames.
> 
> I'm not sure about virtually contiguous. Obviously if they are in lowmem
> then the 1-1 map combined with the fact that they are physically
> contiguous makes them virtually contiguous too. I'm not sure what
> happens if they are highmem -- since kmap (or whatever) would need to do
> some extra work in this case. I've not looked but I don't recall
> noticing this in the past...

So to double-check this, I wrote this nice little module (attached)
that allocates this type of page and does 'DMA' on it.

From the tests it seems to work OK - in some cases it uses a bounce
buffer and in others it does not. And the resulting buffers do contain
the data we expected.

# modprobe dma_test
modprobe dma_test
calling  dma_test_init+0x0/0x1000 [dma_test] @ 2875
initcall dma_test_init+0x0/0x1000 [dma_test] returned 0 after 309 usecs
fallback_bus: to_cpu: va: ffff8800642dd000 (pfn:642dd, mfn:53706) w.r.t prev mfn: 53707!
fallback_bus: to_cpu: va: ffff8800642de000 (pfn:642de, mfn:53705) w.r.t prev mfn: 53706!
fallback_bus: to_cpu: va: ffff8800642df000 (pfn:642df, mfn:53704) w.r.t prev mfn: 53705!
fallback_bus: to_cpu: ffff8800642dc000 (pfn:642dc, bus frame: 53707) <= ffff880070046000 (addr: 70046000, frame: 186)
fallback_bus: to_cpu: ffff8800642dd000 (pfn:642dd, bus frame: 53706) <= ffff880070047000 (addr: 70047000, frame: 187)
fallback_bus: to_cpu: ffff8800642de000 (pfn:642de, bus frame: 53705) <= ffff880070048000 (addr: 70048000, frame: 188)
fallback_bus: to_cpu: ffff8800642df000 (pfn:642df, bus frame: 53704) <= ffff880070049000 (addr: 70049000, frame: 189)
fallback_bus: to_dev: va: ffff880059521000 (pfn:59521, mfn:488c2) w.r.t prev mfn: 488c3!
fallback_bus: to_dev: va: ffff880059522000 (pfn:59522, mfn:488c1) w.r.t prev mfn: 488c2!
fallback_bus: to_dev: va: ffff880059523000 (pfn:59523, mfn:488c0) w.r.t prev mfn: 488c1!
fallback_bus: to_dev: va: ffff880059524000 (pfn:59524, mfn:488bf) w.r.t prev mfn: 488c0!
fallback_bus: to_dev: va: ffff880059525000 (pfn:59525, mfn:488be) w.r.t prev mfn: 488bf!
fallback_bus: to_dev: va: ffff880059526000 (pfn:59526, mfn:488bd) w.r.t prev mfn: 488be!
fallback_bus: to_dev: va: ffff880059527000 (pfn:59527, mfn:488bc) w.r.t prev mfn: 488bd!
fallback_bus: to_dev: 0xffff88007004a000(bounce)  <=  0xffff880059520000 (sz: 32768)
fallback_bus: to_dev: ffff880059520000 (pfn:59520, bus frame: 488c3) => ffff88007004a000 (addr: 7004a000, frame: 18a)
fallback_bus: to_dev: ffff880059521000 (pfn:59521, bus frame: 488c2) => ffff88007004b000 (addr: 7004b000, frame: 18b)
fallback_bus: to_dev: ffff880059522000 (pfn:59522, bus frame: 488c1) => ffff88007004c000 (addr: 7004c000, frame: 18c)
fallback_bus: to_dev: ffff880059523000 (pfn:59523, bus frame: 488c0) => ffff88007004d000 (addr: 7004d000, frame: 18d)
fallback_bus: to_dev: ffff880059524000 (pfn:59524, bus frame: 488bf) => ffff88007004e000 (addr: 7004e000, frame: 18e)
fallback_bus: to_dev: ffff880059525000 (pfn:59525, bus frame: 488be) => ffff88007004f000 (addr: 7004f000, frame: 18f)
fallback_bus: to_dev: ffff880059526000 (pfn:59526, bus frame: 488bd) => ffff880070050000 (addr: 70050000, frame: 190)
fallback_bus: to_dev: ffff880059527000 (pfn:59527, bus frame: 488bc) => ffff880070051000 (addr: 70051000, frame: 191)

fallback_bus: to_dev: ffff880059520000 with DMA (18a000) has ffffffcc (expected ffffffcc)
fallback_bus: to_dev: ffff880059521000 with DMA (18b000) has ffffffcc (expected ffffffcc)
fallback_bus: to_dev: ffff880059522000 with DMA (18c000) has ffffffcc (expected ffffffcc)
fallback_bus: to_dev: ffff880059523000 with DMA (18d000) has ffffffcc (expected ffffffcc)
fallback_bus: to_dev: ffff880059524000 with DMA (18e000) has ffffffcc (expected ffffffcc)
fallback_bus: to_dev: ffff880059525000 with DMA (18f000) has ffffffcc (expected ffffffcc)
fallback_bus: to_dev: ffff880059526000 with DMA (190000) has ffffffcc (expected ffffffcc)
fallback_bus: to_dev: ffff880059527000 with DMA (191000) has ffffffcc (expected ffffffcc)
fallback_bus: to_cpu: 0xffff880070046000(bounce)  =>  0xffff8800642dc000 (sz: 16384)
fallback_bus: to_cpu: ffff8800642dc000 with DMA (186000) has ffffffdd (expected ffffffdd)
fallback_bus: to_cpu: ffff8800642dd000 with DMA (187000) has ffffffdd (expected ffffffdd)
fallback_bus: to_cpu: ffff8800642de000 with DMA (188000) has ffffffdd (expected ffffffdd)
fallback_bus: to_cpu: ffff8800642df000 with DMA (189000) has ffffffdd (expected ffffffdd)
fallback_bus: to_cpu: 0xffff880070046000(bounce)  =>  0xffff8800642dc000 (sz: 16384)

> 
> Ian.

Attachment: dma_test.c
Description: Text document

Attachment: 0001-swiotlb-Add-debugging.patch
Description: Text document

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
