
Re: [Xen-devel] lock in vhpet


  • To: "Zhang, Yang Z" <yang.z.zhang@xxxxxxxxx>
  • From: "Andres Lagar-Cavilla" <andres@xxxxxxxxxxxxxxxx>
  • Date: Tue, 24 Apr 2012 19:31:26 -0700
  • Cc: Keir Fraser <keir@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Tim Deegan <tim@xxxxxxx>
  • Delivery-date: Wed, 25 Apr 2012 02:31:56 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>

>
>> -----Original Message-----
>> From: Andres Lagar-Cavilla [mailto:andres@xxxxxxxxxxxxxxxx]
>> Sent: Wednesday, April 25, 2012 9:40 AM
>> To: Zhang, Yang Z
>> Cc: Tim Deegan; xen-devel@xxxxxxxxxxxxxxxxxxx; Keir Fraser
>> Subject: RE: [Xen-devel] lock in vhpet
>>
>> >> -----Original Message-----
>> >> From: Tim Deegan [mailto:tim@xxxxxxx]
>> >> Sent: Tuesday, April 24, 2012 5:17 PM
>> >> To: Zhang, Yang Z
>> >> Cc: andres@xxxxxxxxxxxxxxxx; xen-devel@xxxxxxxxxxxxxxxxxxx; Keir
>> >> Fraser
>> >> Subject: Re: [Xen-devel] lock in vhpet
>> >>
>> >> At 08:58 +0000 on 24 Apr (1335257909), Zhang, Yang Z wrote:
>> >> > > -----Original Message-----
>> >> > > From: Andres Lagar-Cavilla [mailto:andres@xxxxxxxxxxxxxxxx]
>> >> > > Sent: Tuesday, April 24, 2012 1:19 AM
>> >> > >
>> >> > > Let me know if any of this helps
>> >> > No, it does not work.
>> >>
>> >> Do you mean that it doesn't help with the CPU overhead, or that it's
>> >> broken in some other way?
>> >>
>> > It cannot help with the CPU overhead
>>
>> Yang, is there any further information you can provide? A rough idea of
>> where
>> vcpus are spending time spinning for the p2m lock would be tremendously
>> useful.
>>
> I am doing further investigation and hope to get more useful
> information.

Thanks, looking forward to that.

> But actually, the first changeset that introduced this issue is 24770.
> When Win8 boots with HPET enabled, it uses the HPET as its time source,
> which generates a lot of HPET accesses and therefore EPT violations. In
> the EPT violation handler, get_gfn_type_access is called to get the mfn.
> Changeset 24770 introduces the gfn_lock for p2m lookups, and that is when
> the issue appears. After I removed the gfn_lock, the issue went away. But
> in the latest Xen, even if I remove this lock, it still shows high CPU
> utilization.
>
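To make sure we're talking about the same thing, the contention pattern you
describe is essentially the one below. This is a toy, standalone sketch of
the effect, not actual Xen code; all names in it are made up, and it only
mimics the gfn_lock/get_gfn_type_access shape from cs 24770 with a plain
pthread mutex: every vcpu faulting on the HPET MMIO page serializes on one
per-domain lock.

/* Toy model of the contention, NOT Xen code: one per-domain lock
 * serializes every gfn->mfn lookup, and all "vcpus" keep faulting on the
 * same emulated-MMIO (HPET) gfn.  Build with: gcc -O2 -pthread toy.c */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

struct toy_domain {
    pthread_mutex_t p2m_lock;   /* stands in for the per-domain gfn/p2m lock */
    uint64_t p2m[16];           /* toy gfn -> mfn table */
    uint64_t lookups;           /* how many lookups were serialized */
};

/* Stand-in for a locked get_gfn_type_access(): lookup done under the lock. */
static uint64_t toy_get_gfn(struct toy_domain *d, unsigned long gfn)
{
    uint64_t mfn;

    pthread_mutex_lock(&d->p2m_lock);    /* critical section is tiny...      */
    mfn = d->p2m[gfn % 16];
    d->lookups++;
    pthread_mutex_unlock(&d->p2m_lock);  /* ...but every vcpu has to take it */

    return mfn;
}

/* Each thread models a vcpu repeatedly faulting on the HPET MMIO gfn. */
static void *toy_vcpu(void *arg)
{
    struct toy_domain *d = arg;

    for (int i = 0; i < 1000000; i++)
        (void)toy_get_gfn(d, 0xfed00);   /* 0xfed00000 >> 12 */

    return NULL;
}

int main(void)
{
    struct toy_domain d = { .p2m_lock = PTHREAD_MUTEX_INITIALIZER };
    pthread_t vcpus[8];

    for (int i = 0; i < 8; i++)
        pthread_create(&vcpus[i], NULL, toy_vcpu, &d);
    for (int i = 0; i < 8; i++)
        pthread_join(vcpus[i], NULL);

    printf("serialized lookups: %llu\n", (unsigned long long)d.lookups);
    return 0;
}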

It would seem then that even the briefest lock-protected critical section
would cause this? In the mmio case, the p2m lock taken in the hap fault
handler is held during the actual lookup, and for a couple of branch
instructions afterwards.
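
Concretely, the shape I have in mind is something like the sketch below. It
is a toy, self-contained model rather than the real fault-handler code, and
the names in it are invented; the point is only that the lock covers the
lookup plus a couple of branches, and the expensive MMIO emulation runs
after the lock has been dropped.

/* Toy model of the fault-path shape, NOT the real Xen handler: the lock
 * covers only the lookup and a branch or two; the long MMIO-emulation
 * path runs after it has been released. */
#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>

enum toy_p2m_type { TOY_P2M_RAM, TOY_P2M_MMIO_DM };

static pthread_mutex_t toy_p2m_lock = PTHREAD_MUTEX_INITIALIZER;

static enum toy_p2m_type toy_lookup(unsigned long gfn, uint64_t *mfn)
{
    *mfn = gfn;                            /* pretend an identity p2m */
    return (gfn == 0xfed00) ? TOY_P2M_MMIO_DM : TOY_P2M_RAM;
}

static void toy_emulate_mmio(unsigned long gfn)
{
    (void)gfn;                             /* long, lock-free emulation path */
}

static bool toy_hap_fault(unsigned long gfn)
{
    uint64_t mfn;
    enum toy_p2m_type t;
    bool mmio;

    pthread_mutex_lock(&toy_p2m_lock);     /* lock taken for the lookup...   */
    t = toy_lookup(gfn, &mfn);
    mmio = (t == TOY_P2M_MMIO_DM);         /* ...and a couple of branches    */
    pthread_mutex_unlock(&toy_p2m_lock);   /* dropped before the heavy work  */

    if (mmio) {
        toy_emulate_mmio(gfn);             /* emulation runs without the lock */
        return true;
    }
    return false;
}

int main(void)
{
    return toy_hap_fault(0xfed00) ? 0 : 1;
}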

In the latest Xen, with the get_gfn lock removed, which lock is the time being spent on?

Thanks,
Andres

> yang
>



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
