
Re: [Xen-devel] [PATCH v7 1/3] x86/tlb: introduce a flush HVM ASIDs flag



On Fri, Mar 20, 2020 at 02:16:47PM +0100, Jan Beulich wrote:
> On 20.03.2020 10:01, Roger Pau Monné wrote:
> > On Fri, Mar 20, 2020 at 08:21:19AM +0100, Jan Beulich wrote:
> >> On 19.03.2020 20:07, Julien Grall wrote:
> >>> On 19/03/2020 18:43, Roger Pau Monné wrote:
> >>>> On Thu, Mar 19, 2020 at 06:07:44PM +0000, Julien Grall wrote:
> >>>>>
> >>>>>
> >>>>> On 19/03/2020 17:38, Roger Pau Monné wrote:
> >>>>>> On Thu, Mar 19, 2020 at 04:21:23PM +0000, Julien Grall wrote:
> >>>>>>   >> Why can't you keep flush_tlb_mask() here?
> >>>>>>
> >>>>>> Because filtered_flush_tlb_mask is used in populate_physmap, and
> >>>>>> changes to the physmap require an ASID flush on AMD hardware.
> >>>>>
> >>>>> I am afraid this still does not explain to me why flush_tlb_mask()
> >>>>> could not be updated so it flushes the ASID on AMD hardware.
> >>>>
> >>>> The current behavior prior to this patch is to flush the ASIDs on
> >>>> every TLB flush.
> >>>>
> >>>> flush_tlb_mask is too widely used on x86 in places where there's no
> >>>> need to flush the ASIDs. This prevents using assisted flushes (by L0)
> >>>> when running nested, since those assisted flushes performed by L0
> >>>> don't flush the L2 guests' TLBs.
> >>>>
> >>>> I could keep the current behavior and leave flush_tlb_mask also
> >>>> flushing the ASIDs, but that seems wrong, as the function doesn't
> >>>> have anything in its name that suggests it also flushes the in-guest
> >>>> TLBs for HVM.
> >>>
> >>> I agree the name is confusing; I had to look at the implementation to
> >>> understand what it does.
> >>>
> >>> How about renaming (or introducing) the function to
> >>> flush_tlb_all_guests_mask() or flush_tlb_all_guests_cpumask()?
> >>
> >> And this would then flush _only_ guest TLBs?
> > 
> > No, I think from Julien's proposal (if I understood it correctly)
> > flush_tlb_all_guests_mask would do what flush_tlb_mask does prior to
> > this patch (flush Xen's TLBs + HVM ASIDs).
> > 
> >>>> I would prefer the default to be not to flush the ASIDs, so that
> >>>> users need to request it explicitly by passing the flag to
> >>>> flush_mask.
> >>> That's an x86 choice. For common code, I would rather not introduce
> >>> those flags until we have another arch that makes use of them.
> >>
> >> The flags should perhaps indeed remain x86-specific, but suitable
> >> wrappers usable from common code should exist (as you suggest
> >> below).
> > 
> > I don't have a strong opinion regarding naming; are you OK with the
> > names proposed above?
> 
> Well, no - IMO a function named e.g. flush_tlb_all_guests_cpumask() is
> not supposed to flush any host TLBs. But I'll also respond to Julien's
> subsequent reply in a minute.

It seems like flush_tlb_mask already has different meanings on ARM and
x86, as the ARM implementation only flushes the guests' TLBs but not
Xen's.
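
To make that difference concrete, here is a rough sketch of what the x86
side looks like with the split proposed in this patch (the flag values
below are made up and the definitions are simplified, this is only meant
to illustrate the intent):

/* Illustrative only: the real flag values and definitions differ. */
#define FLUSH_TLB            0x1 /* flush Xen's (host) TLBs           */
#define FLUSH_HVM_ASID_CORE  0x2 /* tickle the HVM ASIDs (guest TLBs) */

/* With this patch flush_tlb_mask() no longer touches the guest ASIDs... */
#define flush_tlb_mask(mask) flush_mask(mask, FLUSH_TLB)

/*
 * ...so callers that modify the physmap have to request the ASID tickle
 * explicitly:
 *
 *     flush_mask(mask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
 */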

Alternatively, instead of adding a new wrapper, I could code this as:

static inline void filtered_flush_tlb_mask(uint32_t tlbflush_timestamp)
{
    cpumask_t mask;

    /* Only flush CPUs whose TLBs may still hold stale entries. */
    cpumask_copy(&mask, &cpu_online_map);
    tlbflush_filter(&mask, tlbflush_timestamp);
    if ( !cpumask_empty(&mask) )
    {
        perfc_incr(need_flush_tlb_flush);
#ifdef CONFIG_X86
        /*
         * filtered_flush_tlb_mask is used after modifying the p2m in
         * populate_physmap, Xen needs to trigger an ASID tickle as this is a
         * requirement on AMD hardware.
         */
        flush_mask(&mask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
#else
        flush_tlb_mask(&mask);
#endif
    }
}

And we can look later into getting rid of the filtered_flush_tlb_mask
calls in populate_physmap and alloc_heap_pages if they are really
unneeded, which would allow us to get rid of the function altogether.
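
FWIW, if we end up keeping the helper, the #ifdef could also be hidden
behind a per-arch wrapper along the lines Jan suggested above. A rough
sketch (the arch_flush_tlb_mask name is made up, just to illustrate):

#include <xen/cpumask.h>
#include <asm/flushtlb.h>

/*
 * Hypothetical per-arch helper (shown with an #ifdef here for brevity,
 * each arch would provide its own version): common code calls it without
 * having to know about the x86 flush flags.
 */
static inline void arch_flush_tlb_mask(const cpumask_t *mask)
{
#ifdef CONFIG_X86
    /* x86: flush the host TLBs and tickle the HVM ASIDs. */
    flush_mask(mask, FLUSH_TLB | FLUSH_HVM_ASID_CORE);
#else
    /* Other architectures: flush_tlb_mask() already covers the guests. */
    flush_tlb_mask(mask);
#endif
}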

Thanks, Roger.
