
Re: [Xen-devel] lock in vhpet



At 02:36 +0000 on 25 Apr (1335321409), Zhang, Yang Z wrote:
> > > Actually, the first changeset that introduced this issue is 24770.
> > > When win8 boots with hpet enabled, it uses the hpet as its time
> > > source, which generates a lot of hpet accesses and EPT violations.
> > > In the EPT violation handler, it calls get_gfn_type_access to get
> > > the mfn. Changeset 24770 introduced the gfn_lock for p2m lookups,
> > > and that is when the issue appears. After I removed the gfn_lock,
> > > the issue went away. But in the latest xen, even with this lock
> > > removed, it still shows high cpu utilization.
> > 
> > It would seem then that even the briefest lock-protected critical
> > section would cause this? In the mmio case, the p2m lock taken in the
> > hap fault handler is held during the actual lookup, and for a couple
> > of branch instructions afterwards.
> > 
> > In latest Xen, with lock removed for get_gfn, on which lock is time spent?
> Still the p2m_lock.
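
For reference, the contended path looks roughly like this -- a simplified
sketch only, not the actual code (the real lookup lives in the hap fault
handler and get_gfn_type_access; the function below and its exact
signature are just illustrative):

/* Sketch: every hpet MMIO access from the guest faults here, and all  */
/* vcpus serialise on the gfn/p2m lock around the lookup.              */
static int hap_mmio_fault_sketch(struct vcpu *v, unsigned long gfn)
{
    p2m_type_t t;
    p2m_access_t a;

    /* Takes the gfn (p2m) lock since c/s 24770 ... */
    (void)get_gfn_type_access(p2m_get_hostp2m(v->domain), gfn,
                              &t, &a, 0, NULL);

    if ( p2m_is_mmio(t) )
    {
        put_gfn(v->domain, gfn);   /* ... dropped again here */
        handle_mmio();             /* hpet emulation happens in here */
        return 1;
    }

    put_gfn(v->domain, gfn);
    return 0;
}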

Can you please try the attached patch?  I think you'll need this one
plus the ones that take the locks out of the hpet code. 

This patch makes the p2m lock into an rwlock and adjusts a number of the
paths that don't update the p2m so they only take the read lock.  It's a
bit rough, but I can boot a 16-way win7 guest with it.
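
To give an idea of the shape of the change (the attached patch is the
real thing; the wrapper names below are made up for illustration,
rwlock_t and read_lock()/write_lock() are the existing Xen primitives):

/* Illustration only -- not the attached patch. */
typedef struct {
    rwlock_t lock;
    /* ... rest of struct p2m_domain ... */
} p2m_sketch_t;

/* Lookup-only paths: many vcpus can hold this concurrently, so the    */
/* hpet MMIO faults no longer serialise on a single spinlock.          */
static inline void p2m_read_lock(p2m_sketch_t *p2m)
{
    read_lock(&p2m->lock);
}

static inline void p2m_read_unlock(p2m_sketch_t *p2m)
{
    read_unlock(&p2m->lock);
}

/* Anything that updates the p2m (type changes, new mappings, pod,     */
/* sharing) still takes the lock exclusively.                          */
static inline void p2m_write_lock(p2m_sketch_t *p2m)
{
    write_lock(&p2m->lock);
}

static inline void p2m_write_unlock(p2m_sketch_t *p2m)
{
    write_unlock(&p2m->lock);
}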

N.B. Since rwlocks don't show up in the existing lock profiling, please
don't try to use the lock-profiling numbers to see if it's helping!

Andres, this is basically the big-hammer version of your "take a
pagecount" changes, plus the change you made to hvmemul_rep_movs().
If this works I intend to follow it up with a patch to make some of the
read-modify-write paths avoid taking the lock (by using a
compare-exchange operation so they only take the lock on a write).  If
that succeeds I might drop put_gfn() altogether. 
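
One possible shape for that follow-up -- my reading of the plan, with
hypothetical helpers, reusing the lock wrappers sketched above and Xen's
usual cmpxchg() (which returns the old value):

/* Sketch of a read-modify-write fast path: update under the read lock */
/* with cmpxchg, and only fall back to the write lock if we raced.     */
static void p2m_update_entry_sketch(p2m_sketch_t *p2m,
                                    volatile unsigned long *entry,
                                    unsigned long new)
{
    unsigned long old;

    p2m_read_lock(p2m);
    old = *entry;
    if ( cmpxchg(entry, old, new) == old )
    {
        p2m_read_unlock(p2m);
        return;                  /* fast path: no exclusive lock needed */
    }
    p2m_read_unlock(p2m);

    /* Lost the race with a concurrent update: redo it exclusively. */
    p2m_write_lock(p2m);
    *entry = new;
    p2m_write_unlock(p2m);
}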

But first it will need a lot of tidying up.  Noticeably missing:
 - SVM code equivalents to the vmx.c changes
 - grant-table operations still use the lock, because frankly I 
   could not follow the current code, and it's quite late in the evening.
I also have a long list of uglinesses in the mm code that I found while
writing this lot. 

Keir, I have no objection to later replacing this with something better
than an rwlock. :)  Or with making a NUMA-friendly rwlock
implementation, since I really expect this to be heavily read-mostly
when paging/sharing/pod are not enabled.

Cheers,

Tim.

Attachment: get-page-from-gfn
Description: Text document
