
Re: [Xen-devel] Re: pci passthrough xhci host controller



On Tue, Sep 21, 2010 at 10:03:10PM +0200, Sander Eikelenboom wrote:
> Hi Konrad,
> 
> I indeed have the feeling the memleaks aren't huge, and adding the various 
> kernel hacking debug options ended up doing more wrong than right.
> I have turned off the options I added and re-instated "swiotlb=force" in the 
> domU config to see if it goes from a working to a freezing config, but I have 
> the feeling it will not make a difference.
> 
> Then I have 4 differences left:
> 
> - Other dom0 kernel since the tests resulting in continuous freezes of my 
> server
> - Other domU kernel since the tests resulting in continuous freezes of my 
> server
> - Other workload (server is running more VMs)
> - Other physical hardware
>         - server is an AMD Phenom X6, current config an Intel quad core
>         - Both have their IOMMU disabled
>         - Both are 64-bit capable CPUs with 64-bit Xen, dom0 and domU
> 
>         - But most notably perhaps, the Intel has only 2GB RAM, the server 8GB
> 
> Could the available physical RAM be an issue here?
> I limit the RAM for dom0 with dom0_mem=

OK, but that only limits dom0's memory, not where the guests get their
memory. I think you need it in conjunction with maxmem, say: maxmem=4GB
dom0_mem=max:512MB

This way your 8GB machine has 4GB of memory available for both dom0 and the
guest.
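For reference, a rough sketch of where each setting would go (paths and exact
values here are illustrative, not from your setup): dom0_mem= belongs on the
Xen hypervisor command line in the bootloader entry, while maxmem= is a
per-guest setting in the domU configuration file, where it is given in MB.

```
# Bootloader entry (illustrative GRUB line, hypothetical paths):
multiboot /boot/xen.gz dom0_mem=max:512M

# In the domU config file (value in MB):
maxmem = 4096
```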

> 
> After this test succeeds on the Intel machine, I will retry the same 
> Xen, dom0 kernel and domU kernel on the AMD config.
> Is there anything I can especially log/configure/debug to get more detail to 
> see if the 8GB could be the problem?

I think we have concluded that the device in question (3.0 PCIe USB host
controller) can do 64-bit DMA. In that case the SWIOTLB is only used as an
address translation system (pfn -> mfn, and vice versa). If it were 32-bit it
would also be used for bouncing the DMA buffers - there are sometimes cases
where the driver does not sync after the bounce (the existing radeon/nouveau
drivers are perfect examples), ending up with corruption or a hung device.
But those show up early in development, and this is the new USB controller
that can do 64-bit DMA instead of the dreaded 32-bit limit that all other USB
controllers are stuck with.
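To make the two SWIOTLB roles above concrete, here is a toy sketch of them
in C - all names (pfn_to_machine, dma_map, dma_sync_from_bounce, the p2m
table) are hypothetical simplifications, not the real kernel interfaces in
lib/swiotlb.c. A 64-bit device only needs the pfn -> mfn translation; a
32-bit-limited device whose buffer lands above 4GB additionally gets its data
copied through a low bounce buffer, and the driver must copy back afterwards
(the sync step that, when skipped, causes the corruption mentioned above).

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define PAGE_SHIFT  12
#define DMA32_LIMIT (1ULL << 32)   /* 4GB boundary for 32-bit DMA */

/* Pretend p2m table: guest pfn -> machine mfn.  Entry 3 maps above 4GB. */
static uint64_t p2m[4] = { 7, 5, 9, 0x200000 };

/* Translation role: turn a guest pfn + offset into a machine address. */
static uint64_t pfn_to_machine(uint64_t pfn, uint64_t off)
{
    return (p2m[pfn] << PAGE_SHIFT) | off;
}

/* Low buffer standing in for the SWIOTLB bounce aperture. */
static unsigned char bounce[64];

/* Map a buffer for DMA: a 64-bit device just gets the translated address;
 * a 32-bit device with a >4GB machine address gets bounced. */
static uint64_t dma_map(uint64_t pfn, uint64_t off,
                        const unsigned char *data, size_t len,
                        int dev_is_64bit)
{
    uint64_t machine = pfn_to_machine(pfn, off);
    if (dev_is_64bit || machine < DMA32_LIMIT)
        return machine;            /* address translation only */
    memcpy(bounce, data, len);     /* bounce: copy into low memory */
    return (uint64_t)(uintptr_t)bounce;
}

/* After the device writes, the driver must sync the bounce buffer back;
 * skipping this step is the corruption/hang case described above. */
static void dma_sync_from_bounce(unsigned char *data, size_t len)
{
    memcpy(data, bounce, len);
}
```

In this model the 64-bit xHCI controller always takes the cheap
translation-only path, which is why the bounce-and-sync bugs seen in 32-bit
devices should not apply to it.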

The memory difference might be a red herring. It could be the workload - more
VMs and a latency issue (say we are waiting for an IRQ and it comes just a
bit too late)? I think the idea of narrowing down the amount of memory on the
AMD machine could help.

What is the exact model of your USB capture device and the USB PCI device?

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

