
Re: [Xen-devel] Performance issues with large grants


  • To: Daniel De Graaf <dgdegra@xxxxxxxxxxxxx>, xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: Keir Fraser <keir@xxxxxxx>
  • Date: Thu, 23 Sep 2010 22:10:46 +0100
  • Delivery-date: Thu, 23 Sep 2010 14:11:34 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

On 23/09/2010 22:03, "Daniel De Graaf" <dgdegra@xxxxxxxxxxxxx> wrote:

> I am trying to map a large number of pages in an HVM domain and am
> running into significant performance issues. Currently, mapping 1280
> pages takes 11 seconds to complete, which is not very usable. I have
> determined that the most costly part of the update is the IOMMU
> flush, which takes about 9ms per page (the domain doing the mapping
> has a PCI device passed through to it). Deferring this flush and
> batching it at the end of the update improves the speed to around
> 800ms, but it introduces instability in the network drivers, which
> stop receiving host-to-guest packets.
> 
> Is there a better way to batch the P2M updates for grant table
> modifications that I should try? Even the improved time is still well
> above what I would expect (about 40ms), so other high-cost operations
> must be involved in addition to the IOMMU flush. I have also
> considered allowing 2M pages to be mapped between domains as an
> alternative solution; this would be a larger API change, and I'd be
> interested in comments on the general usefulness of 2M grants.

I think an HVM guest chooses what pseudo-physical address a grant gets
mapped at? I would allocate mappings across a larger region of
pseudo-physical address space and arrange to flush only when wrapping
back to the start of that region. For correctness (at the cost of some
strict isolation) you only need to flush between reuses of any given
pseudo-physical frame.
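
A minimal sketch of that wrap-around scheme, compilable as plain C
outside Xen. The gntmap_ring structure, the iommu_flush_all() stub and
the 4096-frame window size are all illustrative assumptions, not real
Xen interfaces; the stub just counts how often the expensive flush
would fire.

#include <stdio.h>

#define RING_FRAMES 4096u     /* frames reserved in the p2m window */

static unsigned long flushes; /* counts the expensive flushes issued */

/* Stub standing in for the real per-domain IOMMU flush (~9ms each). */
static void iommu_flush_all(void)
{
    flushes++;
}

struct gntmap_ring {
    unsigned long base_gpfn;  /* first frame of the reserved window */
    unsigned int  next;       /* next slot to hand out */
};

/*
 * Hand out the next pseudo-physical frame for a grant mapping. The
 * flush is issued only on wrap-around, i.e. immediately before any
 * frame is reused, so no stale IOMMU entry can alias a recycled
 * frame, and the flush cost is amortised over RING_FRAMES mappings.
 */
static unsigned long gntmap_alloc_gpfn(struct gntmap_ring *r)
{
    if (r->next == RING_FRAMES) {
        iommu_flush_all();
        r->next = 0;
    }
    return r->base_gpfn + r->next++;
}

int main(void)
{
    struct gntmap_ring ring = { .base_gpfn = 0x100000, .next = 0 };

    /* Map 1280 pages ten times over. */
    for (int i = 0; i < 12800; i++)
        (void)gntmap_alloc_gpfn(&ring);

    printf("12800 mappings, %lu flushes\n", flushes);
    return 0;
}

With a 4096-frame window this issues only three flushes for 12800
mappings, so the ~9ms flush cost is paid once per window rather than
once per page. The price is that a frame's old mapping stays reachable
through the IOMMU until the next wrap, which is the isolation loss
mentioned above.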

 -- Keir


