
Re: [Xen-devel] Load increase after memory upgrade (part2)



Hello Konrad,

Tuesday, January 10, 2012, 10:55:33 PM, you wrote:

> On Mon, Dec 19, 2011 at 10:56:09AM -0400, Konrad Rzeszutek Wilk wrote:
>> On Sun, Dec 18, 2011 at 01:19:16AM +0100, Sander Eikelenboom wrote:
>> > I have also done some experiments with the patch. In domU I also get the
>> > 0% full for my USB controllers with video grabbers; in dom0 I get 12%
>> > full, and both my Realtek 8169 ethernet controllers seem to use the bounce
>> > buffering ...
>> > And that with an (AMD) IOMMU? It all seems kind of strange, although it is
>> > also working ...
>> > I'm not having much time now, hoping to get back with a full report soon.
>> 
>> Hm, so domU nothing, but dom0 it reports. Maybe the patch is incorrect
>> when running as PV guest .. Will look in more details after the
>> holidays. Thanks for being willing to try it out.

> Good news is I am able to reproduce this with my 32-bit NIC with 3.2 domU:

> [  771.896140] SWIOTLB is 11% full
> [  776.896116] 0 [e1000 0000:00:00.0] bounce: from:222028(slow:0)to:2 map:222037 unmap:227220 sync:0
> [  776.896126] 1 [e1000 0000:00:00.0] bounce: from:0(slow:0)to:5188 map:5188 unmap:0 sync:0
> [  776.896133] 3 [e1000 0000:00:00.0] bounce: from:0(slow:0)to:1 map:1 unmap:0 sync:0

> but interestingly enough, if I boot the guest as the first one I do not get these bounce
> requests. I will shortly bootup a Xen-O-Linux kernel and see if I get these same numbers.


I started to experiment some more with what I encountered.

In dom0 I was seeing that my r8169 ethernet controllers were using bounce 
buffering, according to the dump-swiotlb module: it was showing "12% full".
Checking in sysfs shows:
serveerstertje:/sys/bus/pci/devices/0000:09:00.0# cat consistent_dma_mask_bits
32
serveerstertje:/sys/bus/pci/devices/0000:09:00.0# cat dma_mask_bits
32
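
As far as I understand it, those sysfs values just reflect whatever masks the 
driver asked for through the generic DMA API at probe time, roughly like the 
sketch below (simplified, not the actual r8169 code; example_probe is a 
made-up name):

    #include <linux/pci.h>
    #include <linux/dma-mapping.h>

    /* Simplified sketch of how a driver ends up with the 32/32 masks
     * shown in sysfs above; not the actual r8169 probe code. */
    static int example_probe(struct pci_dev *pdev)
    {
            /* dma_mask_bits: mask used for streaming DMA mappings */
            if (pci_set_dma_mask(pdev, DMA_BIT_MASK(32)))
                    return -EIO;

            /* consistent_dma_mask_bits: mask used for coherent allocations */
            if (pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32)))
                    return -EIO;

            /* With 32-bit masks, any buffer above 4GB has to be bounced
             * by swiotlb (or swiotlb-xen in our case). */
            return 0;
    }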

If I remember correctly, wasn't the memory allocation for dom0 changed to be at 
the top of memory instead of low, somewhere between 2.6.32 and 3.0?
Could that change cause all devices to need bounce buffering, and could it 
therefore explain why some people are seeing more CPU usage in dom0?

I have forced my r8169 to use a 64-bit DMA mask (using use_dac=1):
serveerstertje:/sys/bus/pci/devices/0000:09:00.0# cat consistent_dma_mask_bits
32
serveerstertje:/sys/bus/pci/devices/0000:09:00.0# cat dma_mask_bits
64
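
If I read the r8169 driver right (and I may well be wrong), use_dac basically 
just makes the probe path try a 64-bit mask first, something along these lines 
(paraphrased from my reading, not the exact code):

    /* Paraphrased sketch of the r8169 probe path; not verbatim. */
    if ((sizeof(dma_addr_t) > 4) && use_dac &&
        !pci_set_dma_mask(pdev, DMA_BIT_MASK(64))) {
            /* 64-bit DMA: the NIC can address all of host memory,
             * so swiotlb should not need to bounce for it anymore */
            dev->features |= NETIF_F_HIGHDMA;
    } else {
            /* default: 32-bit mask, anything above 4GB gets bounced */
            rc = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
            if (rc < 0)
                    goto err_out;
    }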

This results in dump-swiotlb reporting:

[ 1265.616106] 0 [r8169 0000:09:00.0] bounce: from:5(slow:0)to:0 map:0 unmap:0 sync:10
[ 1265.625043] SWIOTLB is 0% full
[ 1270.626085] 0 [r8169 0000:08:00.0] bounce: from:6(slow:0)to:0 map:0 unmap:0 sync:12
[ 1270.635024] SWIOTLB is 0% full
[ 1275.635091] 0 [r8169 0000:09:00.0] bounce: from:5(slow:0)to:0 map:0 unmap:0 sync:10
[ 1275.644261] SWIOTLB is 0% full
[ 1280.654097] 0 [r8169 0000:09:00.0] bounce: from:5(slow:0)to:0 map:0 unmap:0 sync:10



So it has changed from 12% to 0%, although it still reports some 
bouncing? Or am I misinterpreting things?
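
For what it's worth, my (possibly wrong) mental model of when swiotlb decides 
to bounce is roughly the following; this is a sketch of the idea, not the real 
lib/swiotlb.c or swiotlb-xen code, and bounce_through_swiotlb() is a made-up 
helper:

    #include <linux/dma-mapping.h>
    #include <linux/swiotlb.h>

    /* Rough sketch of the map_page decision; not the real code. */
    static dma_addr_t sketch_map_page(struct device *dev, struct page *page,
                                      unsigned long offset, size_t size)
    {
            phys_addr_t phys = page_to_phys(page) + offset;
            dma_addr_t dev_addr = phys_to_dma(dev, phys);

            /* If the buffer already falls inside the device's DMA mask
             * (and, for Xen, is machine-contiguous), use it directly. */
            if (dma_capable(dev, dev_addr, size))
                    return dev_addr;

            /* Otherwise copy ("bounce") through the swiotlb pool, which
             * sits in low, machine-contiguous memory. */
            return bounce_through_swiotlb(dev, phys, size); /* made-up helper */
    }

If that is right, it would at least be consistent with the "full" percentage 
dropping to 0% once the 64-bit mask is in place.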


Another thing I was wondering about: couldn't the hypervisor offer a small 
window of 32-bit addressable memory to all domUs (or only to those using PCI 
passthrough) to be used for DMA?

(Oh yes, I haven't got a clue what I'm talking about ... so it probably makes no 
sense at all :-) )


--
Sander




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

