
Re: [Xen-devel] Re: [PATCH] Allow removing writable mappings from splintered page tables.


  • To: deshantm@xxxxxxxxx
  • From: "George Dunlap" <George.Dunlap@xxxxxxxxxxxxx>
  • Date: Tue, 16 Sep 2008 14:46:23 +0100
  • Cc: Gianluca Guida <gianluca.guida@xxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Tue, 16 Sep 2008 06:46:48 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

Hmm, no really obvious low-hanging fruit.  Xen-HVM was about 9%
slower than your reported numbers for Xen-PV, and the trace shows that
the guest spent about that much of its time inside the hypervisor.
The breakdown:
* 3.6% propagating page faults to the guest
* 3.0% pulling entries through from out-of-sync guest pagetables into
the shadow pagetables
* 1.4% marking pages out of sync (75% of which was in unsyncs that
first had to re-sync another page)
* 0.9% cr3 switches
* 0.9% handling I/O

(Rounding may cause the numbers not to add up exactly.)
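
For the curious, percentages like these come from bucketing trace time
by event class and dividing by the total; below is a minimal sketch of
that kind of aggregation.  The record layout is invented for
illustration and is not the real xentrace format.

/* Toy aggregator: sum cycles per event class, print percentages.
 * The record layout below is hypothetical, NOT the real xentrace
 * format; it just shows the shape of the computation. */
#include <stdio.h>
#include <stdint.h>

struct rec {                    /* hypothetical trace record */
    uint32_t event;             /* event class id */
    uint64_t cycles;            /* cycles attributed to this event */
};

int main(void)
{
    struct rec r;
    uint64_t total = 0, per_class[256] = { 0 };

    while (fread(&r, sizeof(r), 1, stdin) == 1) {
        per_class[r.event & 0xff] += r.cycles;
        total += r.cycles;
    }

    for (int i = 0; i < 256; i++)
        if (per_class[i])
            printf("class %02x: %5.1f%%\n", i,
                   100.0 * per_class[i] / total);
    return 0;
}

(Fed a trace in that toy format, e.g. ./aggregate < trace.bin, it
prints a breakdown like the one above.)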

So one of the biggest costs, really, is that Linux seems to insist on
mapping pages one at a time as they're demand-faulted, rather than
doing a batch of them.  Unfortunately, having pages out-of-sync means
that we must use the slow propagate path rather than the fast one, and
the slow path is at least 25% slower.
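
To make the fast/slow distinction concrete, here is a rough sketch of
the choice the propagation path has to make.  It is illustrative only;
the types and helpers are invented, not the actual Xen shadow code.

/* Illustrative sketch of fast vs. slow shadow-entry propagation.
 * NOT the actual Xen shadow code; types and helpers are made up. */

typedef unsigned long pte_t;

struct gpage {
    int out_of_sync;            /* page has unvalidated guest writes? */
    pte_t gptes[512];           /* guest pagetable entries */
};

/* Stand-ins for the real translation/validation work: */
static pte_t shadow_translate(pte_t gpte) { return gpte; }
static void revalidate_page(struct gpage *gpt) { gpt->out_of_sync = 0; }

/* Propagate the guest entry at 'idx' into the shadow table 'spt'. */
static void propagate(struct gpage *gpt, pte_t *spt, unsigned int idx)
{
    if (!gpt->out_of_sync) {
        /* Fast path: the guest page is in sync, so the single entry
         * can be translated and installed directly. */
        spt[idx] = shadow_translate(gpt->gptes[idx]);
    } else {
        /* Slow path: the page may hold stale entries, so it has to
         * be re-checked before anything is installed; this is where
         * the 25%+ penalty comes from. */
        revalidate_page(gpt);
        spt[idx] = shadow_translate(gpt->gptes[idx]);
    }
}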

The only avenues for optimization I can see are:
* See if there's a way to reduce the number of unsyncs that cause
resyncs (a sketch of that bookkeeping follows this list).  Allowing
more pages to go out-of-sync *might* do this; or it might just shift
the same overhead into the cr3 switch.
* Reduce the time of "hot paths" through the hypervisor by profiling, &c.
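
Here is a minimal sketch of the out-of-sync bookkeeping behind that
first trade-off, assuming a fixed-size OOS set with round-robin
eviction.  All names and the eviction policy are invented for
illustration; this is not how Xen actually tracks OOS pages.

/* Hypothetical fixed-size out-of-sync (OOS) set.  When it is full,
 * unsyncing a new page forces a resync of an old one: the "unsync
 * that had to re-sync another page" case in the breakdown above. */

#define OOS_SLOTS 4

struct page;                        /* opaque guest pagetable page */

/* Stand-in: the real thing would re-write-protect and revalidate. */
static void resync(struct page *pg) { (void)pg; }

static struct page *oos[OOS_SLOTS];
static unsigned int next_victim;    /* simple round-robin eviction */

static void mark_out_of_sync(struct page *pg)
{
    if (oos[next_victim])
        resync(oos[next_victim]);   /* the extra resync cost */
    oos[next_victim] = pg;
    next_victim = (next_victim + 1) % OOS_SLOTS;
}

/* On cr3 switch, everything is brought back in sync: */
static void resync_all(void)
{
    for (unsigned int i = 0; i < OOS_SLOTS; i++)
        if (oos[i]) {
            resync(oos[i]);
            oos[i] = 0;
        }
}

Raising OOS_SLOTS in a scheme like this cuts the unsync-time resyncs
but adds work in resync_all() at cr3-switch time, which is exactly the
shift described above.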

 -George

On Mon, Sep 15, 2008 at 6:03 PM, George Dunlap
<George.Dunlap@xxxxxxxxxxxxx> wrote:
> Heh... the blatant copying is flattering and annoying at the same
> time. :-)  Ah, the beauty of open-source...
>
> I've got your trace, and I'll take a look at it tomorrow. Thanks!
>
>  -George
>
> On Mon, Sep 15, 2008 at 5:30 PM, Todd Deshane <deshantm@xxxxxxxxx> wrote:
>> On Mon, Sep 15, 2008 at 6:38 AM, George Dunlap
>> <George.Dunlap@xxxxxxxxxxxxx> wrote:
>>> And your original numbers showed elapsed time to be 527s for KVM, so
>>> now Xen is 8 seconds in the lead for HVM Linux. :-)  Thanks for the
>>> help tracking this down!
>>>
>>
>> KVM is also working on improved page table algorithms
>> http://www.mail-archive.com/kvm@xxxxxxxxxxxxxxx/msg03562.html
>>
>> I think the competition is a good thing.
>>
>>> If you have time, could you take another 30-second trace with the new
>>> changes in, just for fun?  I'll take a quick look and see if there's
>>> any other low-hanging fruit to grab.
>>>
>>
>> Sent the trace to you with another service called sendspace, since, for
>> some reason, the trace file was much bigger.
>>
>> Todd
>>
>> --
>> Todd Deshane
>> http://todddeshane.net
>> check out our book: http://runningxen.com
>>
>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

