
[Xen-changelog] [xen stable-4.7] xen/arm: p2m: Perform local TLB invalidation on vCPU migration



commit ac8d90e10eed6553ec86b6dd32b660d64b899a39
Author:     Julien Grall <julien.grall@xxxxxxx>
AuthorDate: Fri Mar 17 12:11:45 2017 -0700
Commit:     Stefano Stabellini <sstabellini@xxxxxxxxxx>
CommitDate: Fri Mar 17 12:21:46 2017 -0700

    xen/arm: p2m: Perform local TLB invalidation on vCPU migration
    
    The ARM architecture allows an OS to have per-CPU page tables, as it
    guarantees that TLBs never migrate from one CPU to another.
    
    This works fine until it is done in a guest. Consider the following
    scenario:
        - vcpu-0 maps P to V
        - vcpu-1 maps P' to V
    
    If run on the same physical CPU, vcpu-1 can hit stale TLB entries
    generated by vcpu-0's accesses and end up accessing the wrong physical
    page.
    
    The solution to this is to keep a per-p2m map of which vCPU last ran
    on each pCPU, and to invalidate local TLBs whenever two vCPUs from the
    same VM run on the same pCPU.
    
    Unfortunately it is not possible to allocate a per-CPU variable on the
    fly, so for now the size of the array is NR_CPUS. This is fine because
    we still have space in struct domain. We may want to add a helper to
    allocate per-CPU variables in the future.
    
    Signed-off-by: Julien Grall <julien.grall@xxxxxxx>
    Reviewed-by: Stefano Stabellini <sstabellini@xxxxxxxxxx>
---
 xen/arch/arm/p2m.c        | 25 +++++++++++++++++++++++++
 xen/include/asm-arm/p2m.h |  3 +++
 2 files changed, 28 insertions(+)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index f0169aa..9162f5b 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -96,6 +96,8 @@ void p2m_save_state(struct vcpu *p)
 void p2m_restore_state(struct vcpu *n)
 {
     register_t hcr;
+    struct p2m_domain *p2m = &n->domain->arch.p2m;
+    uint8_t *last_vcpu_ran;
 
     hcr = READ_SYSREG(HCR_EL2);
 
@@ -112,6 +114,17 @@ void p2m_restore_state(struct vcpu *n)
 
     WRITE_SYSREG(hcr, HCR_EL2);
     isb();
+
+    last_vcpu_ran = &p2m->last_vcpu_ran[smp_processor_id()];
+
+    /*
+     * Flush the local TLB for the domain to prevent wrong translations
+     * when running multiple vCPUs of the same domain on a single pCPU.
+     */
+    if ( *last_vcpu_ran != INVALID_VCPU_ID && *last_vcpu_ran != n->vcpu_id )
+        flush_tlb_local();
+
+    *last_vcpu_ran = n->vcpu_id;
 }
 
 void flush_tlb_domain(struct domain *d)
@@ -1422,6 +1435,7 @@ int p2m_init(struct domain *d)
 {
     struct p2m_domain *p2m = &d->arch.p2m;
     int rc = 0;
+    unsigned int cpu;
 
     spin_lock_init(&p2m->lock);
     INIT_PAGE_LIST_HEAD(&p2m->pages);
@@ -1447,6 +1461,17 @@ int p2m_init(struct domain *d)
 err:
     spin_unlock(&p2m->lock);
 
+    /*
+     * Make sure the chosen type is able to store any vCPU ID between 0
+     * and the maximum number of virtual CPUs supported, as well as
+     * INVALID_VCPU_ID.
+     */
+    BUILD_BUG_ON((1 << (sizeof(p2m->last_vcpu_ran[0]) * 8)) < MAX_VIRT_CPUS);
+    BUILD_BUG_ON((1 << (sizeof(p2m->last_vcpu_ran[0]) * 8)) < INVALID_VCPU_ID);
+
+    for_each_possible_cpu(cpu)
+        p2m->last_vcpu_ran[cpu] = INVALID_VCPU_ID;
+
     return rc;
 }
 
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index d240d1e..cb19350 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -66,6 +66,9 @@ struct p2m_domain {
     /* Radix tree to store the p2m_access_t settings as the pte's don't have
      * enough available bits to store this information. */
     struct radix_tree_root mem_access_settings;
+
+    /* Track on which pCPU this p2m was last used, and by which vCPU */
+    uint8_t last_vcpu_ran[NR_CPUS];
 };
 
 /* List of possible type for each page in the p2m entry.
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.7
