Re: [Xen-devel] Re: [Queries] Unpinning and Unhooking shadow
At 10:47 +0530 on 15 Mar (1173955665), jeet wrote:
> Sorry, but it is still not clear to me how putting 0 in the shadow top
> level PT entries, which are PRESENT, will drop the reference counts of
> the low level shadows those entries point to.

shadow_unhook_mappings()
 calls sh_unhook_*_mappings()
 calls shadow_set_l*e() with an empty entry
 calls sh_put_ref() on the MFN in the previous entry
 calls sh_destroy_shadow() if that was the last reference to that MFN
 (which recursively drops references to lower level page tables, and so on).

> So when handling a page fault after this unhooking (if, as explained
> above, this removes all references to the low level shadows), we need
> to create all the low level shadows again recursively
> (shadow_get_and_create_l1e()) and put the entry in the top level
> shadow of the currently executing process.  Is my understanding
> correct?

Only if we managed somehow to get as far as unhooking the entries in a
currently running process.  In the vast majority of cases the first
pass (unpinning) will free enough memory before we get anywhere near
this.

> Also, after both scans these top level shadows will have the pinned
> flag clear, since we are not pinning them again after unpinning in
> sh_unpin().  Right?

Yes.  Though if the guest reloads CR3 with the same value, they will be
repinned then.  In any case, if we hit this case very often, performance
will already be shot by the cost of all those recursive teardowns.  And
if the shadow memory is properly provisioned, hitting it is probably a
sign that the guest is thrashing, so shadow performance is the least of
our problems.

Cheers,

Tim.

-- 
Tim Deegan <Tim.Deegan@xxxxxxxxxxxxx>, XenSource UK Limited
Registered office c/o EC2Y 5EB, UK; company number 05334508
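
[Editor's note: below is a small, self-contained C sketch of the teardown
chain described above.  The type and function names in it (shadow_table,
set_entry, put_ref, destroy_shadow, unhook_mappings) are invented for
illustration; they only mirror the roles of shadow_set_l*e(), sh_put_ref(),
sh_destroy_shadow() and sh_unhook_*_mappings() and are not the real Xen
definitions.  The point it shows: writing an empty entry over a present one
drops the reference that entry held, and the last reference triggers a
recursive teardown of the lower-level shadows.]

/* Simplified model of the unhook/teardown chain.  Not Xen code: names and
 * structures are illustrative only. */
#include <stdio.h>
#include <stdlib.h>

#define ENTRIES_PER_TABLE 4            /* tiny tables keep the example short */

struct shadow_table {
    int level;                         /* 2 = top level, 1 = leaf level      */
    int refcount;                      /* references held on this shadow     */
    struct shadow_table *entry[ENTRIES_PER_TABLE];  /* NULL == not present   */
};

static void put_ref(struct shadow_table *t);

/* Role of sh_destroy_shadow(): drop the references this table holds on its
 * lower-level shadows (recursing through put_ref), then free it. */
static void destroy_shadow(struct shadow_table *t)
{
    for (int i = 0; i < ENTRIES_PER_TABLE; i++)
        if (t->entry[i] != NULL)
            put_ref(t->entry[i]);
    printf("destroying level-%d shadow %p\n", t->level, (void *)t);
    free(t);
}

/* Role of sh_put_ref(): drop one reference; the last one tears the shadow
 * down. */
static void put_ref(struct shadow_table *t)
{
    if (--t->refcount == 0)
        destroy_shadow(t);
}

/* Role of shadow_set_l*e(): installing a new (here, empty) entry releases
 * the reference the previous entry held on its target. */
static void set_entry(struct shadow_table *t, int i, struct shadow_table *new)
{
    struct shadow_table *old = t->entry[i];
    t->entry[i] = new;
    if (new != NULL)
        new->refcount++;
    if (old != NULL)
        put_ref(old);
}

/* Role of sh_unhook_*_mappings(): blank every present top-level entry. */
static void unhook_mappings(struct shadow_table *top)
{
    for (int i = 0; i < ENTRIES_PER_TABLE; i++)
        if (top->entry[i] != NULL)
            set_entry(top, i, NULL);
}

static struct shadow_table *new_table(int level)
{
    struct shadow_table *t = calloc(1, sizeof(*t));
    t->level = level;
    return t;
}

int main(void)
{
    /* Two-level shadow: one top table referencing two leaf tables. */
    struct shadow_table *top = new_table(2);
    top->refcount = 1;                 /* e.g. the pin / CR3 reference        */
    set_entry(top, 0, new_table(1));
    set_entry(top, 1, new_table(1));

    /* Unhooking empties the top-level entries; each leaf shadow loses its
     * last reference and is destroyed, freeing shadow memory. */
    unhook_mappings(top);
    put_ref(top);                      /* finally drop the top table itself   */
    return 0;
}

[Running it prints the two leaf shadows being destroyed as the top-level
entries are blanked, and then the top-level shadow itself once its last
(pin-like) reference is dropped.]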