
Re: [Xen-devel] [PATCH v1] x86/hvm/viridian: flush remote tlbs by hypercall



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 19 November 2015 10:18
> To: Paul Durrant
> Cc: Andrew Cooper; Ian Campbell; Wei Liu; Ian Jackson; Stefano Stabellini;
> xen-devel@xxxxxxxxxxxxxxxxxxxx; Keir (Xen.org)
> Subject: Re: [PATCH v1] x86/hvm/viridian: flush remote tlbs by hypercall
> 
> >>> On 19.11.15 at 10:43, <paul.durrant@xxxxxxxxxx> wrote:
> > --- a/xen/arch/x86/hvm/hvm.c
> > +++ b/xen/arch/x86/hvm/hvm.c
> > @@ -2452,6 +2452,13 @@ int hvm_vcpu_initialise(struct vcpu *v)
> >      if ( rc != 0 )
> >          goto fail6;
> >
> > +    if ( is_viridian_domain(d) )
> > +    {
> > +        rc = viridian_init(v);
> 
> I think these would better be viridian_vcpu_{,de}init().
> 

Sure.

> > @@ -512,9 +521,25 @@ int rdmsr_viridian_regs(uint32_t idx, uint64_t *val)
> >      return 1;
> >  }
> >
> > +int viridian_init(struct vcpu *v)
> > +{
> > +    v->arch.hvm_vcpu.viridian.flush_cpumask = xmalloc(cpumask_t);
> 
> Why not alloc_cpumask_var()?
> 

Because I didn't know about it. Now I do :-)
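For v2 I'll switch over to that (and do the rename above); roughly the following, as an untested sketch:

int viridian_vcpu_init(struct vcpu *v)
{
    /*
     * cpumask_var_t is either an embedded cpumask_t or a pointer,
     * depending on NR_CPUS, so it needs the proper allocator rather
     * than xmalloc().
     */
    return alloc_cpumask_var(&v->arch.hvm_vcpu.viridian.flush_cpumask)
           ? 0 : -ENOMEM;
}

void viridian_vcpu_deinit(struct vcpu *v)
{
    free_cpumask_var(v->arch.hvm_vcpu.viridian.flush_cpumask);
}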

> > +    case HvFlushVirtualAddressSpace:
> > +    case HvFlushVirtualAddressList:
> > +    {
> > +        struct page_info *page;
> > +        unsigned int page_off;
> > +        uint8_t *input_params;
> > +        uint64_t vcpu_mask;
> 
> Is there a built in limit to 64 CPUs in the Hyper-V spec?
> 

Yes. The spec. includes the following (in section 12.4.2):

"Future versions of the hypervisor may support more than 64 virtual processors 
per partition. In that case, a new field will be added to the flags value that 
allows the caller to define the "processor bank" to which the processor mask 
applies."

> > +        uint64_t flags;
> > +        cpumask_t *pcpu_mask = curr->arch.hvm_vcpu.viridian.flush_cpumask;
> > +        struct vcpu *v;
> > +
> > +        /*
> > +         * See Microsoft Hypervisor Top Level Spec. sections 12.4.2
> > +         * and 12.4.3.
> > +         */
> > +        perfc_incr(mshv_flush);
> > +
> > +        /* These hypercalls should never use the fast-call convention. */
> > +        status = HV_STATUS_INVALID_PARAMETER;
> > +        if ( input.fast )
> > +            break;
> > +
> > +        /* Get input parameters. */
> > +        page = get_page_from_gfn(currd, paddr_to_pfn(input_params_gpa),
> > +                                 NULL, P2M_ALLOC);
> 
> Does this really need to be P2M_ALLOC?

I guess not. Since the guest has just written the input parameters there, the 
page won't be a zero-filled (i.e. unpopulated) one, so there's no chance of it 
being swept from under us.
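
i.e. just drop the flag (sketch) - though this probably becomes moot anyway 
given the copy-based approach below:

        page = get_page_from_gfn(currd, paddr_to_pfn(input_params_gpa),
                                 NULL, 0);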

> 
> > +        if ( !page )
> > +            break;
> > +
> > +        page_off = input_params_gpa & (PAGE_SIZE - 1);
> > +
> > +        input_params = __map_domain_page(page);
> > +        input_params += page_off;
> > +
> > +        /*
> > +         * Address space is ignored since we cannot restrict the flush
> > +         * to a particular guest CR3 value.
> > +         */
> > +
> > +        flags = *(uint64_t *)(input_params + 8);
> > +        vcpu_mask = *(uint64_t *)(input_params + 16);
> 
> What if one of these crosses the page boundary?
> 

Good point. I'd assumed from the spec. that they would not cross a page 
boundary but, reading it again, I can't see why I assumed that. I'll look at 
using hvm_copy_from_guest_phys() instead (the parameter block is identified by 
a guest physical address, so that's the right primitive here).
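
Something along these lines, perhaps (untested sketch; the struct layout just 
mirrors the offsets used above):

        struct {
            uint64_t address_space;
            uint64_t flags;
            uint64_t vcpu_mask;
        } input_params;

        /*
         * Read the whole input parameter block in one go; this also copes
         * with it straddling a page boundary. (status is still
         * HV_STATUS_INVALID_PARAMETER from the fast-call check above.)
         */
        if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
                                      sizeof(input_params)) != HVMCOPY_okay )
            break;

        flags = input_params.flags;
        vcpu_mask = input_params.vcpu_mask;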

> > +        input_params -= page_off;
> > +        unmap_domain_page(input_params);
> > +
> > +        put_page(page);
> > +
> > +        /*
> > +         * It is not clear from the spec. if we are supposed to
> > +         * include current virtual CPU in the set or not in this case,
> > +         * so err on the safe side.
> > +         */
> > +        if ( flags & HV_FLUSH_ALL_PROCESSORS )
> > +            vcpu_mask = ~0ul;
> > +
> > +        cpumask_clear(pcpu_mask);
> > +
> > +        /*
> > +         * For each specified virtual CPU flush all ASIDs to invalidate
> > +         * TLB entries the next time it is scheduled and then, if it
> > +         * is currently running, add its physical CPU to a mask of
> > +         * those which need to be interrupted to force a flush. (Since
> > +         * ASIDs have already been flushed they actually only need
> > +         * forcing out of non-root mode, but this is as good a way as
> > +         * any).
> > +         */
> > +        for_each_vcpu ( currd, v )
> > +        {
> > +            if ( !(vcpu_mask & (1ul << v->vcpu_id)) )
> > +                continue;
> > +
> > +            hvm_asid_flush_vcpu(v);
> > +            if ( v->is_running )
> > +                cpumask_set_cpu(v->processor, pcpu_mask);
> > +        }
> > +
> > +        if ( !cpumask_empty(pcpu_mask) )
> > +            flush_tlb_mask(pcpu_mask);
> 
> Is this correct when scheduling happens between calculating
> pcpu_mask and actually using it?
> 
> > --- a/xen/include/asm-x86/hvm/viridian.h
> > +++ b/xen/include/asm-x86/hvm/viridian.h
> > @@ -22,6 +22,7 @@ union viridian_apic_assist
> >  struct viridian_vcpu
> >  {
> >      union viridian_apic_assist apic_assist;
> > +    cpumask_var_t flush_cpumask;
> 
> This suggests you even introduce a build problem without using
> alloc_cpumask_var() above.
> 

Ok.

  Paul

> Jan

