Re: [Xen-devel] swiotlb=force in Konrad's xen-pcifront-0.8.2 pvops domU kernel with PCI passthrough
Excerpts from Konrad Rzeszutek Wilk's message of 2010-11-18 12:19:36 -0500:
> Keir, Dan, Mathieu, Chris, Mukesh,
>
> This fellow is passing in a PCI device to his Xen PV guest and trying
> to get high IOPS. The kernel he is using is a 2.6.36 with tglx's
> sparse_irq rework.
>
> > I wanted to confirm that bounce buffering was indeed occurring, so I
> > modified swiotlb.c in the kernel and added printks in the following
> > functions:
> >   swiotlb_bounce
> >   swiotlb_tbl_map_single
> >   swiotlb_tbl_unmap_single
> > Sure enough, we were calling all three five times per I/O. We took your
> > suggestion and replaced pci_map_single with pci_pool_alloc. The
> > swiotlb calls were gone, but the I/O performance only improved 6% (29k
> > IOPS to 31k IOPS), which is still abysmal.

Hey! 6% is nothing to sneeze at. How fast does it go on bare metal?

I usually do four things:

1) perf record -g -a -f 'sleep 15' (use perf report to look at the
   biggest CPU hogs)
2) mpstat -P ALL 1 to find the CPU doing all the softirq processing
3) perf record -g -C N -f 'sleep 15', where N is the CPU that
   mpstat -P ALL showed doing all the softirq processing
4) Turn off the Intel IOMMU. This isn't an option for the virtualized
   case, but I'd try it on/off on bare metal.

-chris

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
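
For reference, a minimal sketch of the kind of printk instrumentation
described in the quoted text, dropped into swiotlb_bounce() in
lib/swiotlb.c to confirm that I/O is being bounced. The signature shown
approximates 2.6.36; the exact prototype varies by kernel version, and
the printed fields are just an illustration:

/*
 * Sketch only: a rate-limited printk inside swiotlb_bounce() to show
 * when data is copied through the bounce buffer.  lib/swiotlb.c already
 * includes the headers this needs; only the two lines before the
 * ellipsis comment are additions.
 */
static void swiotlb_bounce(phys_addr_t phys, char *dma_addr, size_t size,
			   enum dma_data_direction dir)
{
	if (printk_ratelimit())
		printk(KERN_DEBUG "swiotlb_bounce: phys=%llx size=%zu dir=%d\n",
		       (unsigned long long)phys, size, dir);

	/* ... original copy to/from the bounce buffer, unchanged ... */
}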
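
And a minimal sketch of the pci_map_single -> pci_pool_alloc change
mentioned above. The driver name, buffer size, and helper functions here
are hypothetical; only the pci_pool_* calls (thin wrappers around the
dma_pool_* API) are real kernel interfaces. The point of the change is
that buffers come out of a pool allocated up front from DMA-able memory,
so per-I/O streaming mappings no longer go through the swiotlb bounce
path:

#include <linux/pci.h>
#include <linux/dmapool.h>

#define MYDRV_BUF_SIZE 512		/* hypothetical per-command buffer size */

static struct pci_pool *mydrv_pool;	/* hypothetical driver-global pool */

static int mydrv_setup_pool(struct pci_dev *pdev)
{
	/* Create the pool once at probe time. */
	mydrv_pool = pci_pool_create("mydrv_cmd", pdev,
				     MYDRV_BUF_SIZE,	/* buffer size */
				     64,		/* alignment   */
				     0);		/* no boundary */
	return mydrv_pool ? 0 : -ENOMEM;
}

static void *mydrv_get_buf(dma_addr_t *dma)
{
	/* Replaces a per-I/O pci_map_single() on a kmalloc'ed buffer. */
	return pci_pool_alloc(mydrv_pool, GFP_ATOMIC, dma);
}

static void mydrv_put_buf(void *vaddr, dma_addr_t dma)
{
	/* Return the buffer to the pool instead of pci_unmap_single(). */
	pci_pool_free(mydrv_pool, vaddr, dma);
}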