
Re: [Xen-devel] Re: Poor performance on HVM (kernbench)


  • To: "Gianluca Guida" <gianluca.guida@xxxxxxxxxxxxx>
  • From: "George Dunlap" <dunlapg@xxxxxxxxx>
  • Date: Fri, 12 Sep 2008 12:19:12 +0100
  • Cc: Muli Ben-Yehuda <MULI@xxxxxxxxxx>, deshantm@xxxxxxxxx, Anthony Liguori <aliguori@xxxxxxxxxx>, xen-devel mailing list <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Fri, 12 Sep 2008 04:19:37 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

Ah, that's the problem... Linux seems to have changed the location of
the 1:1 map.  Gianluca is using an older kernel, where it's at
0xffff810000000000, but this trace has it at 0xffff880000000000, so
the "guess" heuristic misses it.

Jeremy, is this a permanent move, or is it going to be something
random?  I.e., should we just add a new "guess" heuristic at this
address, or do we need to do something more complicated?

That will solve the brute-force searches for promotions, but the fixup
table for out-of-sync mappings still needs to be fixed...

 -George

On Fri, Sep 12, 2008 at 12:04 PM, George Dunlap <dunlapg@xxxxxxxxx> wrote:
> On Thu, Sep 11, 2008 at 7:26 PM, Gianluca Guida
> <gianluca.guida@xxxxxxxxxxxxx> wrote:
>> Or, it could be a fixup table bug, but I doubt it.
>>
>> George, did you see excessive fixup faults in the trace?
>
> No, nothing excessive; 273,480 over 30 seconds isn't that bad.  The
> main thing was that of 15,024 attempts to remove writable mappings,
> 13,775 (over 90%) had to fall back to a brute-force search.
>
> Looking at the trace, I can't really tell why there should be a
> problem... I'm seeing tons of circumstances where there should only be
> one writable mapping, but it falls back to brute-force search anyway.
> Here's an example:
>
>  24.999159660 -x  vmexit exit_reason EXCEPTION_NMI eip 2b105dcee330
>  24.999159660 -x  wrmap-bf gfn 7453c
>  24.999159660 -x fixup va 2b105f000000 gl1e 800000005caf0067 flags
> (60c)-gp-Pw------
>  24.999748980 -x  vmentry
>  [...]
>  24.999759577 -x  vmexit exit_reason EXCEPTION_NMI eip ffffffff8022a3b0
>  24.999759577 -x fixup:unsync va ffff88007453c008 gl1e 7453c067 flags
> (c000c)-gp------ua-
>  24.999762562 -x  vmentry
>  [...]
>  25.002946338 -x  vmexit exit_reason CR_ACCESS eip ffffffff80491a63
>  25.002946338 -x  wrmap-bf gfn 7e18c
>  25.002946338 -x  oos resync full gfn 7e18c
>  25.002946338 -x  wrmap-bf gfn 7453c
>  25.002946338 -x  oos resync full gfn 7453c
>  25.003526640 -x  vmentry
>
> Here we see gfn 7453c:
>  * promoted to be a shadow (the big 'P' in the flag string); at the
> vmentry, there should be no writable mappings.
>  * marked out-of-sync (one writable mapping, with fixup table)
>  * re-sync'ed because of a CR write, requiring a brute-force search.
>
> Note that the timestamps on the "wrmap-bf" and "oos resync full"
> records are not valid; but the whole vmexit->vmentry arc takes over
> 1.5 milliseconds.
>
>  -George
>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

