
Re: [PATCH v9 1/3] x86/tlb: introduce a flush HVM ASIDs flag

On 15.04.2020 13:49, Roger Pau Monné wrote:
> On Tue, Apr 14, 2020 at 05:06:23PM +0200, Jan Beulich wrote:
>> On 14.04.2020 16:53, Roger Pau Monné wrote:
>>> On Tue, Apr 14, 2020 at 03:50:15PM +0200, Jan Beulich wrote:
>>>> On 14.04.2020 13:19, Roger Pau Monné wrote:
>>>>>>> I think this should work, but I would rather do it in a separate
>>>>>>> patch.
>>>>>> Yes, just like the originally (wrongly, as you validly say) suggested
>>>>>> full removal of them, putting this in a separate patch would indeed
>>>>>> seem better.
>>>>> Would you like me to resend with the requested fix to
>>>>> paging_log_dirty_range (ie: drop the FLUSH_TLB and only call
>>>>> flush_mask for HAP guests running on AMD) then?
>>>> Well, ideally I'd see that function also make use of the intended
>>>> new helper function, if at all possible (and suitable).
>>> Oh, OK. Just to make sure I understand what you are asking for, you
>>> would like me to resend introducing the new guest_flush_tlb_mask
>>> helper and use it in the flush_tlb_mask callers modified by this
>>> patch?
>> Yes (and I now realize it may not make sense to split it off into a
>> separate change).
> I could do a pre-patch that introduces guest_flush_tlb_mask as a
> simple wrapper around flush_tlb_mask and replace the callers that I
> modify in this patch. Then this patch would only introduce
> FLUSH_HVM_ASID_CORE and modify guest_flush_tlb_mask to use it when
> required.
> It might make it more complicated to see which callers require the
> ASID flush, but if you think it's better I can arrange the patches in
> that way.

No, I think it's better to leave it as a single patch. The idea with
splitting was that you'd (fully) take care of some of the sites
needing modification ahead of what is now patch 1. I.e. this would
have been a suitable approach only if some use sites could really
have the call dropped altogether.



