
Re: [Xen-devel] [PATCH 11/18] arm/altp2m: Make flush_tlb_domain ready for altp2m.



Hi Julien,

On 07/05/2016 05:37 PM, Julien Grall wrote:
> 
> 
> On 05/07/16 15:48, Sergej Proskurin wrote:
>> On 07/04/2016 10:32 PM, Julien Grall wrote:
>>> On 04/07/2016 12:45, Sergej Proskurin wrote:
>>>> +        p2m_load_altp2m_VTTBR(n);
>>>> +    else
>>>> +        p2m_load_VTTBR(n->domain);
>>>> +
>>>>       isb();
>>>>
>>>>       if ( is_32bit_domain(n->domain) )
>>>> @@ -119,22 +154,42 @@ void p2m_restore_state(struct vcpu *n)
>>>>   void flush_tlb_domain(struct domain *d)
>>>>   {
>>>>       unsigned long flags = 0;
>>>> +    struct vcpu *v = NULL;
>>>>
>>>> -    /* Update the VTTBR if necessary with the domain d. In this case,
>>>> -     * it's only necessary to flush TLBs on every CPUs with the current VMID
>>>> -     * (our domain).
>>>> +    /*
>>>> +     * Update the VTTBR if necessary with the domain d. In this case, it is only
>>>> +     * necessary to flush TLBs on every CPUs with the current VMID (our
>>>> +     * domain).
>>>>        */
>>>>       if ( d != current->domain )
>>>>       {
>>>>           local_irq_save(flags);
>>>> -        p2m_load_VTTBR(d);
>>>> -    }
>>>>
>>>> -    flush_tlb();
>>>> +        /* If altp2m is active, update VTTBR and flush TLBs of every VCPU */
>>>> +        if ( altp2m_active(d) )
>>>> +        {
>>>> +            for_each_vcpu( d, v )
>>>> +            {
>>>> +                p2m_load_altp2m_VTTBR(v);
>>>> +                flush_tlb();
>>>> +            }
>>>> +        }
>>>> +        else
>>>> +        {
>>>> +            p2m_load_VTTBR(d);
>>>> +            flush_tlb();
>>>> +        }
>>>
>>> Why do you need to do a such things? If the VMID is the same, a single
>>> call to flush_tlb() will nuke all the entries for the given TLBs.
>>>
>>> If the VMID is not shared, then I am not even sure why you would need
>>> to flush the TLBs for all the altp2ms.
>>>
>>
>> If the VMID is shared between multiple views and we would change one
>> entry in one specific view, we might reuse an entry that is not part of
>> the currently active altp2m view of the domain. And even if we assign
>> unique VMIDs to the individual altp2m views (as discussed in patch #04),
>> as far as I understand, we will still need to flush the mappings of all
>> altp2ms at this point because (AFAIK) changes in individual altp2m views
>> will still need to be propagated to the TLBs. Please correct me, if I am
>> wrong at this point.
> 
> You seem to be really confused how TLB flush instructions work on ARM.
> The function flush_tlb() will flush TLBs on all the processor for the
> current VMID. If the VMID is shared, then calling N-times the flush with
> the same VMID will just be a waste of processor cycle.
> 

True, you are right. I was confusing things, sorry. Thank you for
clearing that up.
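
Just to double-check my understanding: if the altp2m views share the
domain's VMID, the per-view loop from my patch can simply be dropped and
flush_tlb_domain keeps its current single-flush shape, roughly like this
(only a sketch based on the unmodified code, not a proposed change):

void flush_tlb_domain(struct domain *d)
{
    unsigned long flags = 0;

    /*
     * With a shared VMID, a single flush_tlb() already removes every
     * stage-2 TLB entry of the domain, so no per-view loop is needed.
     */
    if ( d != current->domain )
    {
        local_irq_save(flags);
        p2m_load_VTTBR(d);
    }

    flush_tlb();

    if ( d != current->domain )
    {
        /* Switch back to the VTTBR of the currently running domain. */
        p2m_load_VTTBR(current->domain);
        local_irq_restore(flags);
    }
}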

> Now, if the VMID is not shared (and based on your current series):
> flush_tlb_domain is called in two places:
>    - p2m_alloc_table
>    - apply_p2m_changes
> 
> For the former, it allocates root table for a specific p2m. For the
> latter, the changes is only done for a specific p2m, and those changes
> are currently not replicated in all the p2ms. So flushing the TLBs for
> each altp2m view do not seem useful here too.
> 

I agree on this point as well. However, we should perhaps think of another
name for flush_tlb_domain, especially if it no longer flushes the entire
domain (including all currently active altp2m views). What do you think?

Also, we would need another flushing routine that takes a specific p2m as
an argument so that its VTTBR (and hence its VMID) is used while flushing
the TLBs, roughly as sketched below.
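
Something along these lines is what I have in mind. The p2m->vttbr field
and the restore path are assumptions on my side and would need to match
whatever patch #04 ends up introducing:

/* Flush the TLB entries associated with a single p2m (host or altp2m). */
static void flush_tlb_p2m(struct p2m_domain *p2m)
{
    unsigned long flags;

    local_irq_save(flags);

    /*
     * Temporarily load the VTTBR of the p2m to flush, so that
     * flush_tlb() operates on that p2m's VMID. (p2m->vttbr is assumed
     * here; patch #04 may store it differently.)
     */
    WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2);
    isb();

    flush_tlb();

    /*
     * Restore the previously active VTTBR. If the current vCPU runs on
     * an altp2m view, this would have to reload that view's VTTBR
     * instead of the hostp2m's one.
     */
    p2m_load_VTTBR(current->domain);
    isb();

    local_irq_restore(flags);
}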

>>
>>> I have looked to Xen with this series applied and noticed that when
>>> you remove a page from the hostp2m, the mapping in the altp2m is not
>>> removed. So the guest may use a page that would have been freed
>>> previously. Did I miss something?
>>>
>>
>> When altp2m is active, the hostp2m is not used. All changes are applied
>> directly to the individual altp2m views. As far as I know, the only
>> situations, where a mapping is removed from a specific p2m view is in
>> the functions unmap_regions_rw_cache, unmap_mmio_regions, and
>> guest_physmap_remove_page. Each one of these functions provide, however
>> the hostp2m to the function apply_p2m_changes, where the mapping is
>> eventually removed. So, we definitely remove mappings from the hostp2m.
>> However, these would not be removed from the associated altp2m views.
>>
>> Can you state a scenario, when and why pages might need to be removed at
>> run time from the hostp2m? In that case, maybe it would make sense to
>> extend the three functions (unmap_regions_rw_cache, unmap_mmio_regions,
>> and guest_physmap_remove_page) to additionally remove the mappings from
>> all available altp2m views?
> 
> All the functions, you mentioned can be called after the domain has been
> created. If you grep guest_physmap_remove_page in the code source, you
> will find that it is used to unmap grant entry from the memory (see
> replace_grant_host_mapping) or when decreasing the memory reservation
> (see do_memory_op).
> 
> Note that, from my understanding, x86 has the same problem.
> 

Yes, the x86 implementation works similarly to the ARM one. I will need to
think about that. One solution could be to simply extend the previously
mentioned functions so that they also unmap the particular entry in all
currently available altp2m views (assuming we allocate the altp2m views on
demand, as discussed in patch #05), along the lines of the sketch below.
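
Roughly what I have in mind (only a sketch; MAX_ALTP2M, the
d->arch.altp2m_p2m[] array and altp2m_remove_entry() are placeholders for
whatever the series actually introduces):

/*
 * Mirror a hostp2m removal in every altp2m view that has been allocated
 * so far. Would be called from e.g. guest_physmap_remove_page().
 */
static void altp2m_propagate_remove(struct domain *d, gfn_t gfn,
                                    unsigned long nr_pages)
{
    unsigned int i;

    for ( i = 0; i < MAX_ALTP2M; i++ )
    {
        struct p2m_domain *ap2m = d->arch.altp2m_p2m[i];

        /* Views are allocated on demand; skip the unused slots. */
        if ( ap2m == NULL )
            continue;

        /* Placeholder for the actual per-view removal of the mapping. */
        altp2m_remove_entry(d, ap2m, gfn, nr_pages);
    }
}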

Whatever the solution will be, it will need to be ported to the x86
implementation as well.

Thank you for pointing this out.

Cheers,
~Sergej
