
Re: [PATCH for-4.14] x86/hap: use get_gfn_type in hap_update_paging_modes



On 17.06.2020 16:49, Tamas K Lengyel wrote:
> On Wed, Jun 17, 2020 at 8:24 AM Jan Beulich <jbeulich@xxxxxxxx> wrote:
>>
>> On 17.06.2020 15:43, Tamas K Lengyel wrote:
>>> On Wed, Jun 17, 2020 at 7:36 AM Jan Beulich <jbeulich@xxxxxxxx> wrote:
>>>>
>>>> On 17.06.2020 15:31, Tamas K Lengyel wrote:
>>>>> On Wed, Jun 17, 2020 at 7:28 AM Jan Beulich <jbeulich@xxxxxxxx> wrote:
>>>>>>
>>>>>> On 17.06.2020 15:21, Tamas K Lengyel wrote:
>>>>>>> On Wed, Jun 17, 2020 at 7:04 AM Jan Beulich <jbeulich@xxxxxxxx> wrote:
>>>>>>>>
>>>>>>>> On 17.06.2020 15:00, Tamas K Lengyel wrote:
>>>>>>>>> On Wed, Jun 17, 2020 at 3:59 AM Jan Beulich <jbeulich@xxxxxxxx> wrote:
>>>>>>>>>> If there are code paths of both kinds, which approach to use in
>>>>>>>>>> vmx_load_pdptrs() may need to be chosen based on what
>>>>>>>>>> paging_locked_by_me() returns. Or perhaps an unlocked query is
>>>>>>>>>> fine in either case?
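
(A minimal sketch of the branching suggested above, for illustration only;
lookup_locked() and lookup_unlocked() are hypothetical placeholders rather
than real Xen helpers, and only paging_locked_by_me() comes from mm-locks.h:)

    /*
     * Sketch only: choose a locked or lock-less p2m query depending on
     * whether the caller already holds the paging lock.  lookup_locked()
     * and lookup_unlocked() are hypothetical placeholders for whichever
     * query primitives would actually be used.
     */
    static struct page_info *pdpt_page_lookup(struct vcpu *v,
                                              unsigned long gfn,
                                              p2m_type_t *t)
    {
        struct domain *d = v->domain;

        if ( paging_locked_by_me(d) )
            /*
             * Already under the paging lock (e.g. called while updating
             * the paging mode): don't take p2m locks out of order, do a
             * plain lock-less lookup.
             */
            return lookup_unlocked(d, gfn, t);

        /* Otherwise use a locked query, which may also perform fixups. */
        return lookup_locked(d, gfn, t);
    }
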
>>>>>>>>>
>>>>>>>>> Perhaps adjusting vmx_load_pdptrs to choose the unlocked query would be
>>>>>>>>> fine. But at that point what is the reason for having the lock
>>>>>>>>> ordering at all? Why not just have a single recursive lock and avoid
>>>>>>>>> issues like this altogether?
>>>>>>>>
>>>>>>>> With just a single lock, the contention problems we already know
>>>>>>>> we have would be even worse. When the current locking model was
>>>>>>>> introduced, there was actually a plan to make gfn_lock() more
>>>>>>>> fine-grained (i.e. not simply "degenerate" to p2m_lock()), for
>>>>>>>> example.
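
(For reference, the "degeneration" mentioned above is roughly how mm-locks.h
defines things today, paraphrased rather than quoted:)

    /*
     * Roughly as in xen/arch/x86/mm/mm-locks.h: a per-gfn lock was the
     * plan, but gfn_lock() currently just takes the whole p2m lock.
     */
    #define gfn_lock(p2m, gfn, order)    p2m_lock(p2m)
    #define gfn_unlock(p2m, gfn, order)  p2m_unlock(p2m)
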
>>>>>>>
>>>>>>> Sigh. Well, I've been checking, and adjusting vmx_load_pdptrs to use an
>>>>>>> unlocked query doesn't seem that straightforward because, well, there is
>>>>>>> no unlocked version of p2m_get_page_from_gfn which would also do the
>>>>>>> "fixups".
>>>>>>
>>>>>> Which fixups do we need here, in particular? Of course, whenever
>>>>>> any fixups get done, the operation can't be lock-less.
>>>>>>
>>>>>>> What seems redundant to me, though, is that
>>>>>>> hap_update_paging_modes takes both the p2m_lock (via get_gfn) and the
>>>>>>> paging_lock. Does it really need to take the paging_lock?
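
(Roughly the shape being discussed, paraphrased from memory rather than
quoted from the 4.14 tree; the patch in this thread turns the plain
get_gfn() into a get_gfn_type() call so the populate/unshare fixups happen
here, before the paging lock is taken:)

    static void hap_update_paging_modes(struct vcpu *v)
    {
        struct domain *d = v->domain;
        unsigned long cr3_gfn = v->arch.hvm.guest_cr[3] >> PAGE_SHIFT;
        p2m_type_t t;

        /* Takes (and holds) the p2m/gfn lock for the CR3 frame... */
        get_gfn(d, cr3_gfn, &t);

        /* ... and then the paging lock on top, for the mode change. */
        paging_lock(d);
        /* ... update paging mode, monitor table, etc. ... */
        paging_unlock(d);

        put_gfn(d, cr3_gfn);
    }
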
>>>>>>
>>>>>> From mm-locks.h's comments:
>>>>>>
>>>>>>  * For HAP, it protects the NPT/EPT tables and mode changes.
>>>>>
>>>>> We do the population of the EPT as part of fork_page() if there was a
>>>>> hole in the p2m when the query was issued using P2M_ALLOC (or
>>>>> P2M_UNSHARE). I checked, and without the paging lock held it trips
>>>>> hap_alloc's ASSERT. So yeah, currently I don't think we have a
>>>>> better route than what I sent in.
>>>>
>>>> You didn't answer my question regarding the "fixups" needed, so
>>>> for the moment it's not clear to me yet whether indeed there's
>>>> no better way.
>>>
>>> Umm, I did. The fixups entail populating the EPT from the parent as I
>>> described above.
>>
>> Isn't this taken care of by the new call to get_gfn_type() which you add?
>> As said before, I think at the point we want to obtain the PDPTs all
>> other adjustments and arrangements should have been done already, by
>> higher layers. This code should have no need to do anything beyond a
>> simple lookup.
> 
> I don't really know what else to say. There are multiple paths leading
> to vmx_load_pdptrs; some take the paging_lock while some don't. In
> this particular case we can do the fixups earlier, as I do in this
> patch, because there happens to be a lookup before the paging_lock is
> taken, but in other cases there is no such route, so removing
> P2M_UNSHARE from vmx_load_pdptrs is not an option.

I disagree (because such missing unshare requests could be put
elsewhere), but let me ask another question then: Why is it that
vmx_load_pdptrs() needs to unshare? The function only reads from
the page. Even if the page's content changed later on, we wouldn't
care, as there's no coherence of the PDPTRs once loaded.

Jan
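
(A sketch of what the question above points at, paraphrased from memory
rather than taken from the tree: vmx_load_pdptrs() only reads the PDPTEs,
so the lookup there could ask for population without breaking sharing,
e.g.:)

    /*
     * Paraphrased from vmx_load_pdptrs(): the PDPTE page is only read,
     * so requesting P2M_ALLOC (populate, but don't unshare) in place of
     * the current P2M_UNSHARE may be all that's needed, leaving any
     * unshare request to be placed elsewhere if it turns out to be
     * required.
     */
    page = get_page_from_gfn(v->domain, cr3 >> PAGE_SHIFT, &p2mt,
                             P2M_ALLOC /* instead of P2M_UNSHARE */);
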



 

