
Re: [Xen-ia64-devel] Modify to introduce delayed p2m table destruction



  Thank you for your suggestion.

  We'll study it.

Thanks,
- Tsunehisa Doi


You (yamahata) said:
>>>>> However, during shadow_teardown_xxx() in your patch,
>>>>> another domain might access the p2m table and struct page_info.
>>>>> The page reference convention must be kept correct while they run.
>>>> 
>>>>   Yes, it might access them. I thought so in the past, but after
>>>> the discussion about delayed p2m table destruction in shadow2, I was
>>>> finally satisfied that get_page() avoids memory corruption.
>>> 
>>> You may understand the x86 shadow code,
>>> but you must understand the IA64 code too.
>>> Reasoning about the IA64 code by analogy with the
>>> x86 shadow code can help, but they are different.
>> 
>>   Hmm, I don't understand the difference.
>> 
>>   Can you give me a suggestion about the difference?
> 
> The Xen/IA64 p2m table is lockless, while the Xen/x86 shadow p2m table
> is protected by shadow_lock()/shadow_unlock().
> This is a burden on Xen/IA64 p2m maintenance,
> so we must be very careful when modifying it.
> In particular, we must be aware of memory ordering.
> This is why volatile is sprinkled throughout the code.
> 
> In the Xen/IA64 p2m case,
> the page reference count must be increased before you add a new entry,
> and decreased only after removing the entry.
> The only exception is relinquish_pte(), because it assumes that
> the p2m itself is being freed. (But this assumption is wrong.)
> The Xen/x86 shadow p2m, by contrast, doesn't care about the page
> reference count.
> 
> The blktap patches which I sent out last night impose one more rule,
> related to the PGC_allocated flag.
> The patch introduces _PAGE_PGC_ALLOCATED.
> When a p2m entry is removed and its _PAGE_PGC_ALLOCATED bit is set,
> something like
> if (pte_pgc_allocated(old_pte)) {
>     if (test_and_clear_bit(_PGC_allocated, &page->count_info))
>         put_page(page);
> }
> must be done. domain_put_page() takes care of this.
> 
> Thanks.
> -- 
> yamahata
> 

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel
