[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

[Xen-devel] RE: [PATCH]Further release shadow lock overhead


  • To: "Tim Deegan" <Tim.Deegan@xxxxxxxxxx>
  • From: "Tian, Kevin" <kevin.tian@xxxxxxxxx>
  • Date: Tue, 22 Jan 2008 21:23:37 +0800
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Tue, 22 Jan 2008 05:25:26 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: Achc2o/TCHESXnReRI2AVXhNRNp15AAAzarQ
  • Thread-topic: [PATCH]Further release shadow lock overhead

>From: Tim Deegan [mailto:Tim.Deegan@xxxxxxxxxx] 
>Sent: 22 January 2008 17:37
>
>Hi,
>
>At 14:10 +0800 on 22 Jan (1201011038), Tian, Kevin wrote:
>>      Please take a look whether this optimization is
>> feasible. :-)
>
>It goes just a little bit too far. :)  The shadow lock must be taken
>before the guest pagetable is modified and not released until after the
>shadow has been modified to match it.
>
>It might be possible to delay taking the lock until after the 
>guest walk
>and the mappings are done, but I'm not sure of it.
>
>Cheers,
>
>Tim.
>

When I started on this idea, I had the same concern.
But after more thought, I believe it is safe, since we
are emulating local processor behavior.

A sane guest won't issue concurrent updates to the same
entry, or change an entry while it is being walked by
another processor. However, for architectural correctness
the shadow code must still handle insane cases, which
is the natural reason for the current lock range.

However, if we look into those insane cases and consider
real processor behavior, this lock-release patch actually
doesn't change the outcome.

As stated above, we may have two types of insane scenarios
regarding page table races (take a 2-cpu environment):

#1 - Updates to the same entry are issued on both cpus
#2 - One entry is updated while it is being walked by
       the other cpu

Let's see how physical processors behave:
#1 - the order of the two updates is undefined. Either one
      can take effect as the last update, but the processor
      ensures an atomic update, i.e. no partial content ever
      takes effect
#2 - either the old mapping or the new mapping may be
      observed by the cpu walking the page table, depending
      on when the walk happens relative to the update from
      the other cpu. Atomic update is still honored

Then let's look at the current shadow write-emulation logic:
a) acquire shadow lock
b) walk guest page table and map it
c) update guest entry and validate its content to sync shadow
d) release shadow lock

#1 - Still, the order of the two writes is undefined. The lock
      is required to protect the shadow sync, since the shadow
      is domain-wide shared data.

#2 - Say cpu0 uses a1-b1-c1-d1 to emulate a write to va,
       while cpu1 uses a2-b2-c2-d2 to emulate a write to the
       gl1e mapping that va. In the current logic, there are
       only two possibilities:
       1) a1-b1-c1-d1-a2-b2-c2-d2: the old mapping for va is
           used and the write emulation is done to the old frame
       2) a2-b2-c2-d2-a1-b1-c1-d1: the new mapping for va is
           used and the write emulation is done to the new frame

So even without the attached patch, the current logic already
has two possible results, using either the new or the old
mapping, just like physical processors.

With the attached patch, we may get more interleavings, like:
b1-b2-a1-c1-d1-a2-c2-d2, ...
But the net effect doesn't change, as long as atomic access
is honored.

Anyway, my point is:
        The shadow lock is just the lock for the shadow page
tables; it can't be used to prevent races on the guest page
tables. Even when the guest table walk is included under the
shadow lock, there is no deterministic result for insane guest
behavior. As long as atomic access is ensured, the shadow code
can simply behave like a physical processor and release the
lock earlier...

I know such a change needs care; as you said, shadow
bugs are the most difficult to root-cause. But I wanted to
share the above thoughts for your input. :-)

Thanks,
Kevin

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

