Re: [Xen-devel] Performance issues with large grants
On 23/09/2010 22:03, "Daniel De Graaf" <dgdegra@xxxxxxxxxxxxx> wrote:

> I am trying to map a large number of pages in an HVM domain and am
> running into significant performance issues. Currently, mapping 1280
> pages takes 11 seconds to complete, which is not very usable. I have
> determined that the most costly part of the update is the IOMMU
> flush, which takes about 9ms per page (the domain doing the mapping
> has a PCI device forwarded to it). Disabling this flush and batching
> it at the end of the update improves the speed to around 800ms, but
> it introduces instability in the network drivers, which stop
> receiving host-to-guest packets.
>
> Is there a better method of batching the P2M updates for grant table
> modifications that I should try? The improved speed is still slower
> than I would expect (about 40ms), so other high-cost actions are
> being performed in addition to the IOMMU flush. I have also
> considered allowing 2M pages to be mapped between domains as an
> alternative solution; this would be a larger API change, and I'd be
> interested in comments on the general usefulness of 2M grants.

I think an HVM guest chooses what pseudo-physical address a grant gets
mapped at? I would allocate mappings across a larger region of
pseudo-physical address space and arrange to flush only when wrapping
back to the start of that larger region. For correctness (while losing
some strict isolation) you only need to flush between reuses of any
given pseudo-physical frame.

 -- Keir

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
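For reference, a minimal C sketch of the wrap-around scheme Keir describes: grant mappings are handed out round-robin from a window of pseudo-physical frames, and the expensive IOMMU flush is issued only when the allocator wraps, i.e. just before any frame in the window is reused. The names here are illustrative assumptions, not actual Xen APIs; iommu_flush_all() stands in for whatever flush primitive the real code would call, and NR_WINDOW_FRAMES is an arbitrary window size.

#include <stdint.h>

#define NR_WINDOW_FRAMES  4096          /* size of the reuse window        */

static uint64_t window_base_pfn;        /* first pfn of the window,
                                           set up elsewhere                */
static unsigned int next_slot;          /* next slot to hand out           */

extern void iommu_flush_all(void);      /* assumed flush primitive         */

/* Return the pseudo-physical frame at which to map the next grant. */
static uint64_t alloc_grant_pfn(void)
{
    if (next_slot == NR_WINDOW_FRAMES) {
        /*
         * We are about to reuse slot 0. Every frame handed out since
         * the last flush may still have a stale IOMMU entry, so flush
         * once for the whole window instead of once per mapping.
         */
        iommu_flush_all();
        next_slot = 0;
    }
    return window_base_pfn + next_slot++;
}

With the numbers above, one flush amortised over the whole window replaces 1280 per-page flushes at ~9ms each, which is where essentially all of the 11 seconds went; correctness still holds because no frame is reused before an intervening flush.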