
Re: [Xen-devel] Re: [Queries] Unpinning and Unhooking shadow


  • To: Tim Deegan <Tim.Deegan@xxxxxxxxxxxxx>
  • From: jeet <jeet_sat12@xxxxxxxxxxx>
  • Date: Thu, 15 Mar 2007 10:47:45 +0530 (IST)
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Wed, 14 Mar 2007 22:16:51 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

Hi Tim,

Thanks for the reply.

> 1. We do a backwards traversal of the per-domain list of top-level shadow pages
> and try to unpin them using the function sh_unpin().
> 
> But in unpinning a shadow we unset the pin bit in page->count_info and 
> decrement the reference count of the shadow page
> using a call to sh_put_ref() [defined in xen/arch/x86/mm/shadow/private.h].
> 
> But in this function I am not able to understand why this condition is 
> unlikely:
> 
> if ( unlikely(nx == 0) ) 
>         sh_destroy_shadow(v, smfn);

| Because sh_put_ref is called in lots of other places too...
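The refcount-drop pattern being discussed might be sketched roughly like this (a simplified model with invented names and a plain struct, not the real sh_put_ref(), which keeps the count packed inside page->count_info):

```c
#include <assert.h>

/* Simplified stand-in for a shadow page. */
struct shadow_page {
    unsigned long count;    /* reference count */
    int destroyed;          /* set when the "destroy" path runs */
};

/* Hypothetical put_ref(): drop one reference and destroy the shadow
 * when the count reaches zero.  Most call sites are just releasing a
 * transient reference, so hitting zero is the rare case -- which is
 * why the real code wraps the test in unlikely(). */
static void put_ref(struct shadow_page *sp)
{
    unsigned long nx = --sp->count;
    if (nx == 0)
        sp->destroyed = 1;  /* stands in for sh_destroy_shadow(v, smfn) */
}
```

unlikely() itself is just a branch-prediction hint (built on GCC's __builtin_expect); it does not change the logic, only tells the compiler which arm of the branch to optimize for.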

> As we are trying to make space for a new shadow, which will be created using
> shadow_alloc(),
> shouldn't this sh_destroy_shadow be called for at least one of the entries in
> the top-level list, to free space of at least the required order
> and put the pages back in the free list of the shadow pool?

| I don't understand your question.
|
| When we need to free up some shadow memory, we walk the list of pinned
| shadows and unpin them.  We walk them in reverse order because we hope
| that less useful shadows will be at the end.  
|
| When we unpin a shadow, if it's not in use right now, its refcount falls
| to zero, which causes the whole tree of shadows below it to be
| recursively destroyed. 

My question was to confirm my understanding that unpinning should destroy 
shadows recursively.
Thanks for clarifying that.
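As a rough illustration of the reverse walk described above (invented names and a plain array instead of Xen's linked list; only the shape of the algorithm is intended to match):

```c
#include <assert.h>
#include <stddef.h>

struct pinned_shadow {
    unsigned long count;    /* refcount; the pin itself holds one ref */
    unsigned int pages;     /* pages this shadow tree occupies */
};

/* Walk the pinned list from the tail (where the hopefully less useful
 * shadows live), unpinning until at least `needed` pages have been
 * freed.  Returns the number of pages actually freed. */
static unsigned int unpin_until_free(struct pinned_shadow *list, size_t n,
                                     unsigned int needed)
{
    unsigned int freed = 0;
    for (size_t i = n; i-- > 0 && freed < needed; ) {
        /* "sh_unpin": drop the reference the pin was holding. */
        if (--list[i].count == 0)
            freed += list[i].pages;  /* whole tree destroyed recursively */
    }
    return freed;
}
```

A shadow still in use by a vcpu holds extra references, so its count does not reach zero here and nothing is freed for it; the walk simply moves on to the next candidate.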

| If that doesn't free up enough memory, we try the next one.

> 3. Also, if after this space is still not free, we try unhooking the same
> top-level list by going through each entry in the list and
> marking the corresponding PML4 table's entries as 0 if the entry was marked
> PRESENT.
> 
> But I am not able to understand how this will return pages back to the
> per-domain free list of shadow pages.

| Because it will drop the refcount on all the lower-level shadows that
| were being pointed to by those entries, which will cause some of them to
| be destroyed.

Sorry, but it is still not clear to me how writing 0 into the PRESENT entries of 
the shadow top-level page table drops the reference counts of the lower-level 
shadows those entries point to.

And how and where would these lower-level shadows be destroyed?
Please explain this in more detail.
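The mechanism in question might be modeled like this (hypothetical names, not the real shadow code): clearing a PRESENT entry is paired with a put_ref on the shadow that entry pointed to, and that put_ref is where destruction happens if nothing else holds a reference.

```c
#include <assert.h>
#include <stddef.h>

struct shadow { unsigned long count; int destroyed; };

/* sh_put_ref pattern: drop a reference; destroy on reaching zero. */
static void put_ref(struct shadow *s)
{
    if (--s->count == 0)
        s->destroyed = 1;
}

/* Hypothetical "unhook" of one top-level entry: writing 0 over a
 * PRESENT entry is paired with releasing the reference that entry
 * held on the lower-level shadow it pointed to. */
static void unhook_entry(struct shadow **slot)
{
    struct shadow *lower = *slot;
    if (lower != NULL) {        /* entry was PRESENT */
        *slot = NULL;           /* write the zero entry */
        put_ref(lower);         /* drop the ref the entry held */
    }
}
```

So the entry's zeroing does not itself free anything; it is the paired reference drop that can push a lower-level shadow's count to zero and trigger its (recursive) destruction, returning its pages to the pool.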

| 
| When we have completed both scans of the pinned-shadows list, the only
| shadows still allocated will be the top-level shadows for each vcpu in
| the guest, and they will have no lower-level shadows attached to them.

So when handling a page fault after this unhooking (if, as explained above, it 
removes all references to the lower-level shadows), we need to create all the 
lower-level shadows again recursively (shadow_get_and_create_l1e()) and put the 
entries back into the top-level shadow of the currently executing process. Is my 
understanding correct?

Also, after both scans these top-level shadows will have the pinned flag clear, 
since we do not pin them again after unpinning them in sh_unpin(). Right?
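For context, the demand-rebuild path being asked about could be modeled as a toy multi-level walk (invented names; the real code is shadow_get_and_create_l1e() and its helpers, and frees are omitted here for brevity):

```c
#include <assert.h>
#include <stdlib.h>

#define ENTRIES 4

struct shadow_table {
    struct shadow_table *entry[ENTRIES];  /* NULL == not-present */
};

static int tables_allocated;              /* stands in for the shadow pool */

static struct shadow_table *shadow_alloc_table(void)
{
    tables_allocated++;
    return calloc(1, sizeof(struct shadow_table));
}

/* On a page fault, walk `levels` levels down from the top shadow,
 * allocating any missing intermediate shadow on demand -- the toy
 * analogue of shadow_get_and_create_l1e() recursing through the
 * get_and_create helpers for each level. */
static struct shadow_table *get_and_create(struct shadow_table *top,
                                           unsigned int idx, int levels)
{
    struct shadow_table *t = top;
    while (levels-- > 0) {
        if (t->entry[idx] == NULL)        /* entry not PRESENT */
            t->entry[idx] = shadow_alloc_table();
        t = t->entry[idx];
    }
    return t;
}
```

A second fault on the same address finds the entries already present and allocates nothing, which is why only the first fault after unhooking pays the rebuild cost.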


Jeet
____





_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel