Re: [Xen-devel] [PATCH v4 07/39] arm/p2m: Move hostp2m init/teardown to individual functions



Hi Julien,


On 10/09/2017 07:15 PM, Julien Grall wrote:
> Hi Sergej,
>

[...]

>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index 5e86368010..3a1a38e7af 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -1203,27 +1203,65 @@ static void p2m_free_vmid(struct domain *d)
>>       spin_unlock(&vmid_alloc_lock);
>>   }
>>   -void p2m_teardown(struct domain *d)
>> +/* Reset this p2m table to be empty. */
>> +void p2m_flush_table(struct p2m_domain *p2m)
>>   {
>> -    struct p2m_domain *p2m = p2m_get_hostp2m(d);
>>       struct page_info *pg;
>> +    unsigned int i, j;
>> +    lpae_t *table;
>> +
>> +    if ( p2m->root )
>> +    {
>> +        /* Clear all concatenated root level pages. */
>> +        for ( i = 0; i < P2M_ROOT_PAGES; i++ )
>> +        {
>> +            table = __map_domain_page(p2m->root + i);
>> +
>> +            for ( j = 0; j < LPAE_ENTRIES; j++ )
>> +            {
>> +                lpae_t *entry = table + j;
>> +
>> +                /*
>> +                 * Individual altp2m views can be flushed, whilst altp2m is
>> +                 * active. To avoid inconsistencies on CPUs that continue to
>> +                 * use the views to be flushed (e.g., see altp2m_reset), we
>> +                 * must remove every p2m entry atomically.
>
> When I read this comment, I wonder how this is safe to clear without
> any locking? At least to prevent multiple instance to modify the P2M
> at the same time. If you expect the caller to do it for you, then an
> ASSERT(...) is necessary at the beginning of the function.
>
> Likely you want this function do the locking.

You are correct, I was expecting the caller to lock the associated p2m.
Since altp2m_reset in altp2m.c is the only function that flushes a p2m
without subsequently destroying it, I believe we should leave locking to
the caller and (as you mentioned above) use an ASSERT to enforce this
behavior. Otherwise, we would lock the p2m every time, even when it is
about to be destroyed (as p2m_flush_table is also called by
p2m_teardown_one). I remember you saying (and I agree at this point)
that we should not do that, as it would be wasted cycles. Yet, if you
insist on locking the p2m inside of p2m_flush_table, I will gladly move
the locking into that function.
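
To make the expectation explicit, p2m_flush_table could start with
something along these lines (just a sketch; it assumes the
p2m_is_write_locked() helper that p2m.c already uses for similar checks):

    /* Reset this p2m table to be empty. */
    void p2m_flush_table(struct p2m_domain *p2m)
    {
        /*
         * Flushing is destructive, so the caller is expected to hold the
         * p2m in write mode (p2m_write_lock()). We only assert this here
         * instead of taking the lock, to avoid the extra cost on the
         * teardown path.
         */
        ASSERT(p2m_is_write_locked(p2m));

        /* ... clear the root pages and free the intermediate tables ... */
    }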

>
> Also that comment is more suitable on top of the for loop rather than
> here.

Ok, thanks.

>
>> +                 */
>> +                p2m_remove_pte(entry, p2m->clean_pte);
>> +            }
>> +        }
>
> You still haven't address my comment regarding the overhead you
> introduce whilst tearing down a P2M (e.g when the domain is destroyed).
>

I believe you are referring to the following comments:

> Also, this adds a small overhead when tearing down a p2m because the
> clear is not necessary. 

and

> You seem to forget the p2m teardown is also called during domain
> destruction.

Sorry, I must have missed it. To avoid that overhead, we could either
(i) differentiate between altp2m views and the hostp2m (as the hostp2m
does not have to be cleared before it gets freed), or (ii) introduce
another function (e.g. p2m_clear_table) that is responsible for just the
clearing. That way, p2m_clear_table would be called only from
altp2m_reset (the only place where we actually need to clear altp2m
views without subsequently destroying them), and we would no longer
clear a p2m every time we destroy one. Also, we would then need to lock
the p2m only in p2m_clear_table and not inside p2m_flush_table or one of
its callers. What would you prefer?
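
For option (ii), I am thinking of something along these lines (just a
sketch, name and exact scope up for discussion; it also takes care of
unmapping the root pages again):

    /*
     * Clear all entries of the given p2m without freeing its intermediate
     * page tables. Only needed when a live altp2m view gets reset, so the
     * function takes the p2m lock itself.
     */
    void p2m_clear_table(struct p2m_domain *p2m)
    {
        unsigned int i, j;

        p2m_write_lock(p2m);

        /* Clear all concatenated root level pages. */
        for ( i = 0; i < P2M_ROOT_PAGES; i++ )
        {
            lpae_t *table = __map_domain_page(p2m->root + i);

            for ( j = 0; j < LPAE_ENTRIES; j++ )
                p2m_remove_pte(table + j, p2m->clean_pte);

            unmap_domain_page(table);
        }

        p2m_write_unlock(p2m);
    }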

>> +    }
>> +
>> +    /*
>> +     * Flush TLBs before releasing remaining intermediate p2m page tables to
>> +     * prevent illegal access to stalled TLB entries.
>> +     */
>> +    p2m_flush_tlb(p2m);
>
> Again, this is called by p2m_flush_table where the P2M may not have
> been allocated because the initialization failed. So trying to flush
> TLB may lead to a panic in Xen (the vttbr is invalid).
>
> Furthermore we already flush the TLBs when creating the domain (see
> p2m_alloc_table). So you add yet another overhead.
>

Right. Yet, we must still flush the TLBs after resetting the altp2m
views through altp2m_reset. That is, if you are OK with my suggestion
above (introducing p2m_clear_table), we could limit the TLB flush to the
code that actually resets the views.
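
Concretely, the flush would then move out of p2m_flush_table and sit
right next to the clearing, e.g. at the end of the p2m_clear_table
sketch above, before the lock is dropped:

    void p2m_clear_table(struct p2m_domain *p2m)
    {
        p2m_write_lock(p2m);

        /* ... clear all concatenated root level pages as sketched above ... */

        /*
         * Flush the TLBs while the view is still live (valid VTTBR), so
         * that no CPU keeps using stale entries of the view just reset.
         */
        p2m_flush_tlb(p2m);

        p2m_write_unlock(p2m);
    }

The teardown path (p2m_teardown_one) would then not flush at all, as the
TLBs are flushed anyway when the domain is created (see p2m_alloc_table,
as you pointed out).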

>>   +    /* Free the rest of the trie pages back to the paging pool. */
>>       while ( (pg = page_list_remove_head(&p2m->pages)) )
>>           free_domheap_page(pg);
>>   +    p2m->lowest_mapped_gfn = INVALID_GFN;
>> +    p2m->max_mapped_gfn = _gfn(0);
>> +}
>> +
>> +void p2m_teardown_one(struct p2m_domain *p2m)
>> +{
>> +    p2m_flush_table(p2m);
>> +
>>       if ( p2m->root )
>>           free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
>>         p2m->root = NULL;
>>   -    p2m_free_vmid(d);
>> +    p2m_free_vmid(p2m->domain);
>
> This is a bit odd to read given the VMID is per P2M. Likely you want
> your patches #9 and #10 before this patch.

Ok, thank you.

>
>>         radix_tree_destroy(&p2m->mem_access_settings, NULL);
>>   }
>>   -int p2m_init(struct domain *d)
>> +int p2m_init_one(struct domain *d, struct p2m_domain *p2m)
>>   {
>> -    struct p2m_domain *p2m = p2m_get_hostp2m(d);
>>       int rc = 0;
>>       unsigned int cpu;
>>   @@ -1268,6 +1306,32 @@ int p2m_init(struct domain *d)
>>       return rc;
>>   }
>>   +static void p2m_teardown_hostp2m(struct domain *d)
>> +{
>> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
>> +
>> +    p2m_teardown_one(p2m);
>> +}
>> +
>> +void p2m_teardown(struct domain *d)
>> +{
>> +    p2m_teardown_hostp2m(d);
>> +}
>> +
>> +static int p2m_init_hostp2m(struct domain *d)
>> +{
>> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
>> +
>> +    p2m->p2m_class = p2m_host;
>> +
>> +    return p2m_init_one(d, p2m);
>> +}
>> +
>> +int p2m_init(struct domain *d)
>> +{
>> +    return p2m_init_hostp2m(d);
>> +}
>
> Please explain in the commit message why you need to introduce
> p2m_init/p2m_teardown that just call p2m_init_one/p2m_teardown_hostp2m.

I will extend the commit message to state that a later commit in this
series will extend p2m_init by adding an altp2m initialization routine.
Thanks.
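
Purely for illustration, p2m_init would then eventually look roughly
like this (p2m_init_altp2m is a placeholder name for whatever the later
patch introduces):

    int p2m_init(struct domain *d)
    {
        int rc;

        rc = p2m_init_hostp2m(d);
        if ( rc )
            return rc;

        /* Added by a later patch in this series (placeholder name). */
        return p2m_init_altp2m(d);
    }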

>
>> +
>>   /*
>>    * The function will go through the p2m and remove page reference
>> when it
>>    * is required. The mapping will be removed from the p2m.
>> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
>> index 332d74f11c..9bb38e689a 100644
>> --- a/xen/include/asm-arm/p2m.h
>> +++ b/xen/include/asm-arm/p2m.h
>> @@ -295,6 +295,15 @@ static inline int guest_physmap_add_page(struct
>> domain *d,
>>     mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn);
>>   +/* Flushes the page table held by the p2m. */
>> +void p2m_flush_table(struct p2m_domain *p2m);
>> +
>> +/* Initialize the p2m structure. */
>> +int p2m_init_one(struct domain *d, struct p2m_domain *p2m);
>> +
>> +/* Release resources held by the p2m structure. */
>> +void p2m_teardown_one(struct p2m_domain *p2m);
>> +
>>   /*
>>    * Populate-on-demand
>>    */
>>

Thanks,
~Sergej


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

