
Re: [Xen-devel] [PATCH v2] viridian: fix the HvFlushVirtualAddress/List hypercall implementation



> -----Original Message-----
> From: Juergen Gross [mailto:jgross@xxxxxxxx]
> Sent: 14 February 2019 12:35
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx
> Cc: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; Wei Liu
> <wei.liu2@xxxxxxxxxx>; Jan Beulich <jbeulich@xxxxxxxx>; Roger Pau Monne
> <roger.pau@xxxxxxxxxx>
> Subject: Re: [Xen-devel] [PATCH v2] viridian: fix the
> HvFlushVirtualAddress/List hypercall implementation
> 
> On 14/02/2019 13:10, Paul Durrant wrote:
> > The current code uses hvm_asid_flush_vcpu(), but this is insufficient for
> > a guest running in shadow mode and results in guest crashes early in
> > boot if 'hcall_remote_tlb_flush' is enabled.
> >
> > This patch, instead of open coding a new flush algorithm, adapts the one
> > already used by the HVMOP_flush_tlbs Xen hypercall. The implementation is
> > modified to allow TLB flushing a subset of a domain's vCPUs. A callback
> > function determines whether or not a vCPU requires flushing. This
> > mechanism was chosen because, while it is the case that the currently
> > implemented viridian hypercalls specify a vCPU mask, there are newer
> > variants which specify a sparse HV_VP_SET, and thus use of a callback
> > will avoid needing to expose details of this outside of the viridian
> > subsystem if and when those newer variants are implemented.
> >
> > NOTE: Use of the common flush function requires that the hypercalls are
> >       restartable and so, with this patch applied, viridian_hypercall()
> >       can now return HVM_HCALL_preempted. This is safe as no
> >       modification to struct cpu_user_regs is done before the return.
> >
> > Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
> > ---
> > Cc: Jan Beulich <jbeulich@xxxxxxxx>
> > Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> > Cc: Wei Liu <wei.liu2@xxxxxxxxxx>
> > Cc: "Roger Pau Monné" <roger.pau@xxxxxxxxxx>
> >
> > v2:
> >  - Use cpumask_scratch
> 
> That's not a good idea. cpumask_scratch may be used from other cpus as
> long as the respective scheduler lock is being held. See the comment in
> include/xen/sched-if.h:
> 
> /*
>  * Scratch space, for avoiding having too many cpumask_t on the stack.
>  * Within each scheduler, when using the scratch mask of one pCPU:
>  * - the pCPU must belong to the scheduler,
>  * - the caller must own the per-pCPU scheduler lock (a.k.a. runqueue
>  *   lock).
>  */
> 
> So please don't use cpumask_scratch outside the scheduler!
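
(For context: the rule quoted above means cpumask_scratch_cpu() is only safe
inside scheduler code that already owns the relevant pCPU's runqueue lock. A
minimal sketch of that pattern, assuming the pcpu_schedule_lock_irq() /
pcpu_schedule_unlock_irq() helpers and the cpumask_scratch_cpu() accessor
from xen/include/xen/sched-if.h; illustrative only, not code from the tree:)

    #include <xen/cpumask.h>
    #include <xen/sched-if.h>

    /*
     * Sketch only: the scratch mask of a pCPU may be used solely while
     * that pCPU's scheduler (runqueue) lock is held, and only for a pCPU
     * belonging to the caller's scheduler.
     */
    static void scratch_mask_example(unsigned int cpu, const cpumask_t *mask)
    {
        spinlock_t *lock = pcpu_schedule_lock_irq(cpu);

        /* Safe: we own cpu's runqueue lock for the duration. */
        cpumask_copy(cpumask_scratch_cpu(cpu), mask);

        /* ... scheduler-internal work using cpumask_scratch_cpu(cpu) ... */

        pcpu_schedule_unlock_irq(lock, cpu);
    }

The viridian flush path holds no such lock, which is why the scratch mask is
off limits there.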

Ah, yes, it's because of cpumask_scratch_cpu()... I'd indeed missed that. In 
which case a dedicated flush_cpumask is still required.

  Paul
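
(As a rough illustration of the direction discussed above, i.e. a dedicated
cpumask filled in via the callback mechanism from the patch description
rather than cpumask_scratch, here is a hypothetical sketch. The names
build_flush_mask() and need_flush() are made up for the example and are not
part of the actual patch:)

    #include <xen/cpumask.h>
    #include <xen/sched.h>

    /*
     * Hypothetical helper: the caller supplies a predicate deciding which
     * vCPUs need flushing, and a dedicated mask collects the pCPUs those
     * vCPUs are currently running on.
     */
    static void build_flush_mask(struct domain *d, cpumask_t *flush_mask,
                                 bool (*flush_vcpu)(void *ctxt,
                                                    const struct vcpu *v),
                                 void *ctxt)
    {
        struct vcpu *v;

        cpumask_clear(flush_mask);

        for_each_vcpu ( d, v )
            if ( flush_vcpu(ctxt, v) )
                __cpumask_set_cpu(v->processor, flush_mask);
    }

    /*
     * Example predicate for the existing HvFlushVirtualAddress/List
     * variants, which pass a plain 64-bit vCPU mask.
     */
    static bool need_flush(void *ctxt, const struct vcpu *v)
    {
        uint64_t vcpu_mask = *(uint64_t *)ctxt;

        return v->vcpu_id < 64 && (vcpu_mask & (1ull << v->vcpu_id));
    }

The real patch adapts the existing HVMOP_flush_tlbs logic rather than this
simplified loop; the point here is only the shape of the callback interface
and the dedicated mask.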

> 
> 
> Juergen