Re: [Xen-devel] [PATCH v1] x86/hvm/viridian: flush remote tlbs by hypercall
>>> On 19.11.15 at 10:43, <paul.durrant@xxxxxxxxxx> wrote:
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -2452,6 +2452,13 @@ int hvm_vcpu_initialise(struct vcpu *v)
> if ( rc != 0 )
> goto fail6;
>
> + if ( is_viridian_domain(d) )
> + {
> + rc = viridian_init(v);
I think these would be better named viridian_vcpu_{,de}init().
> @@ -512,9 +521,25 @@ int rdmsr_viridian_regs(uint32_t idx, uint64_t *val)
> return 1;
> }
>
> +int viridian_init(struct vcpu *v)
> +{
> + v->arch.hvm_vcpu.viridian.flush_cpumask = xmalloc(cpumask_t);
Why not alloc_cpumask_var()?
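For illustration, a rough (untested) sketch of what the allocation and
teardown could look like with alloc_cpumask_var(), using the
viridian_vcpu_{,de}init() naming suggested above (struct and field names
as in your patch):

    /* Sketch only - not a tested implementation. */
    int viridian_vcpu_init(struct vcpu *v)
    {
        /* Allocates only when cpumask_var_t is a pointer type. */
        if ( !alloc_cpumask_var(&v->arch.hvm_vcpu.viridian.flush_cpumask) )
            return -ENOMEM;

        return 0;
    }

    void viridian_vcpu_deinit(struct vcpu *v)
    {
        free_cpumask_var(v->arch.hvm_vcpu.viridian.flush_cpumask);
    }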
> + case HvFlushVirtualAddressSpace:
> + case HvFlushVirtualAddressList:
> + {
> + struct page_info *page;
> + unsigned int page_off;
> + uint8_t *input_params;
> + uint64_t vcpu_mask;
Is there a built-in limit of 64 CPUs in the Hyper-V spec?
> + uint64_t flags;
> + cpumask_t *pcpu_mask = curr->arch.hvm_vcpu.viridian.flush_cpumask;
> + struct vcpu *v;
> +
> + /*
> + * See Microsoft Hypervisor Top Level Spec. sections 12.4.2
> + * and 12.4.3.
> + */
> + perfc_incr(mshv_flush);
> +
> + /* These hypercalls should never use the fast-call convention. */
> + status = HV_STATUS_INVALID_PARAMETER;
> + if ( input.fast )
> + break;
> +
> + /* Get input parameters. */
> + page = get_page_from_gfn(currd, paddr_to_pfn(input_params_gpa),
> + NULL, P2M_ALLOC);
Does this really need to be P2M_ALLOC?
> + if ( !page )
> + break;
> +
> + page_off = input_params_gpa & (PAGE_SIZE - 1);
> +
> + input_params = __map_domain_page(page);
> + input_params += page_off;
> +
> + /*
> + * Address space is ignored since we cannot restrict the flush
> + * to a particular guest CR3 value.
> + */
> +
> + flags = *(uint64_t *)(input_params + 8);
> + vcpu_mask = *(uint64_t *)(input_params + 16);
What if one of these crosses the page boundary?
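One (untested) way to avoid both the mapping and the page-crossing
concern would be to pull the whole input block in via
hvm_copy_from_guest_phys(), e.g. (field layout matching the offsets
used above):

    struct {
        uint64_t address_space;
        uint64_t flags;
        uint64_t vcpu_mask;
    } params;

    /* Copes with an input block straddling a page boundary. */
    if ( hvm_copy_from_guest_phys(&params, input_params_gpa,
                                  sizeof(params)) != HVMCOPY_okay )
        break;

    flags = params.flags;
    vcpu_mask = params.vcpu_mask;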
> + input_params -= page_off;
> + unmap_domain_page(input_params);
> +
> + put_page(page);
> +
> + /*
> + * It is not clear from the spec. if we are supposed to
> + * include current virtual CPU in the set or not in this case,
> + * so err on the safe side.
> + */
> + if ( flags & HV_FLUSH_ALL_PROCESSORS )
> + vcpu_mask = ~0ul;
> +
> + cpumask_clear(pcpu_mask);
> +
> + /*
> + * For each specified virtual CPU flush all ASIDs to invalidate
> + * TLB entries the next time it is scheduled and then, if it
> + * is currently running, add its physical CPU to a mask of
> + * those which need to be interrupted to force a flush. (Since
> + * ASIDs have already been flushed they actually only need
> + * forcing out of non-root mode, but this is as good a way as
> + * any).
> + */
> + for_each_vcpu ( currd, v )
> + {
> + if ( !(vcpu_mask & (1ul << v->vcpu_id)) )
> + continue;
> +
> + hvm_asid_flush_vcpu(v);
> + if ( v->is_running )
> + cpumask_set_cpu(v->processor, pcpu_mask);
> + }
> +
> + if ( !cpumask_empty(pcpu_mask) )
> + flush_tlb_mask(pcpu_mask);
Is this correct when scheduling happens between calculating
pcpu_mask and actually using it?
> --- a/xen/include/asm-x86/hvm/viridian.h
> +++ b/xen/include/asm-x86/hvm/viridian.h
> @@ -22,6 +22,7 @@ union viridian_apic_assist
> struct viridian_vcpu
> {
> union viridian_apic_assist apic_assist;
> + cpumask_var_t flush_cpumask;
This suggests you would even introduce a build problem by not using
alloc_cpumask_var() above.
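For reference, xen/include/xen/cpumask.h has (roughly):

    #if NR_CPUS > 2 * BITS_PER_LONG
    typedef cpumask_t *cpumask_var_t;   /* separate allocation needed */
    #else
    typedef cpumask_t cpumask_var_t[1]; /* storage embedded, no allocation */
    #endif

so with a small NR_CPUS the assignment from xmalloc() above wouldn't
even compile; alloc_cpumask_var() covers both cases.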
Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel