
[Xen-devel] [PATCH v6 4/6] xen/arm: vgic: Optimize the way to store the target vCPU in the rank



Xen currently stores the value of the GICD_ITARGETSR registers (for
GICv2) and the GICD_IROUTER registers (for GICv3) directly in the rank.
This makes the emulation of register accesses very simple, but makes the
code to get the target vCPU for a given vIRQ more complex.

While the target vCPU of a vIRQ is retrieved every time a vIRQ is
injected into the guest, accesses to the registers occur far less often.

So the data structure should be optimized for the common case rather
than the rare one.

This patch introduces an array in the rank to store the target vCPU for
every interrupt. This makes the code to get the target very quick. The
emulation code now has to generate the GICD_ITARGETSR and GICD_IROUTER
registers on read access, and to split the written value on write access
so it can be stored in a convenient way.
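
To illustrate the idea, here is a minimal standalone sketch (not Xen
code; the constant names mirror the patch below, locking and the rank
indirection are elided) of storing one target per vIRQ and regenerating
an ITARGETSR value on read:

    #include <stdint.h>
    #include <stdio.h>

    #define NR_TARGETS_PER_ITARGETSR  4U   /* 4 byte-sized fields per register */
    #define NR_BITS_PER_TARGET        (32U / NR_TARGETS_PER_ITARGETSR)

    static uint8_t vcpu[32];               /* one target vCPU ID per vIRQ */

    /* Rebuild an ITARGETSR: one bit set per field, the stored target. */
    static uint32_t fetch_itargetsr(unsigned int offset)
    {
        uint32_t reg = 0;
        unsigned int i;

        /* Align the byte offset to the register boundary */
        offset &= ~(NR_TARGETS_PER_ITARGETSR - 1);

        for ( i = 0; i < NR_TARGETS_PER_ITARGETSR; i++, offset++ )
            reg |= (1U << vcpu[offset]) << (i * NR_BITS_PER_TARGET);

        return reg;
    }

    int main(void)
    {
        vcpu[8] = 2;  /* route vIRQ8 to vCPU2; all others stay on vCPU0 */

        /* The hot path is now a single array lookup: */
        printf("target of vIRQ8 = vCPU%u\n", vcpu[8]);

        /* The slow path regenerates the register: prints 0x1010104 */
        printf("ITARGETSR2 = %#x\n", fetch_itargetsr(8));

        return 0;
    }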

With the new way to store the target vCPU, the structure vgic_irq_rank
shrinks from 320 bytes to 92 bytes, saving about 228 bytes of memory
allocated separately per vCPU.
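
For reference, the bulk of the saving comes from replacing the union,
whose size was dominated by the GICv3 irouter array, with the byte
array. A rough breakdown, assuming the layout shown in vgic.h below:

    old: union { uint32_t itargets[8];     /*  32 bytes (GICv2)          */
                 uint64_t irouter[32]; };  /* 256 bytes (GICv3), sizes
                                              the union                  */
    new: uint8_t vcpu[32];                 /*  32 bytes                  */

    256 - 32 = 224 bytes per rank; the remaining 4 bytes of the 228
    quoted above presumably come from alignment padding no longer
    required once the uint64_t member is gone.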

Note that with these changes, any read of those registers will list only
the target vCPU used by Xen. As the spec is not clear on whether this is
a valid choice, OSes which have a different interpretation of the spec
(i.e. OSes which perform read-modify-write operations on these
registers) may no longer boot on Xen. Still, I think this is a fair
trade between memory usage in Xen (1KB less on a domain using 4 vCPUs
with no SPIs) and a strict interpretation of the spec (though not all
the cases are clearly defined).
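
As a concrete (hypothetical) example of what a guest observes, using
illustrative writeb()/readb() accessors rather than any real API:

    /* Hypothetical guest-visible sequence after this patch */
    writeb(0x0c, ITARGETSR_byte(virq)); /* request vCPU2 | vCPU3        */
    x = readb(ITARGETSR_byte(virq));    /* x == 0x04: only vCPU2 kept
                                           (first bit found by ffs() in
                                           the code below); previously
                                           x == 0x0c                    */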

Furthermore, the implementations of the callback get_target_vcpu are now
exactly the same for GICv2 and GICv3. Consolidate the implementation in
the common vGIC code and drop the callback.

Finally, take the opportunity to fix the coding style and replace "irq"
with "virq" to make clear that we are dealing with virtual IRQs in the
sections we are modifying.

Signed-off-by: Julien Grall <julien.grall@xxxxxxxxxx>

---
    The real reason is that I'd like to drop the vgic_byte_* helpers in
    favor of more generic access helpers. However, that would make the code
    to retrieve the priority more complex, so reworking the code to get the
    target vCPU was the best solution.

    Changes in v5:
        - NR_TARGET_PER_REG has been renamed to NR_TARGETS_PER_ITARGETSR
        - NR_BIT_PER_TARGET has been renamed to NR_BITS_PER_TARGET
        - Fix comments
        - s/NR_BYTE_PER_IROUTER/NR_BYTES_PER_IROUTER/
        - Remove the duplicate declaration of virq in vgic_store_itargetsr

    Changes in v4:
        - Move the behavior changes in 2 separate patches:
            * patch #1: Handle correctly byte write in ITARGETSR
            * patch #2: Don't ignore a write in ITARGETSR if one field is 0
        - Update commit message

    Changes in v3:
        - Make clear in the commit message that we rely on a corner case
        of the spec.
        - Typos

    Changes in v2:
        - Remove get_target_vcpu callback
        - s/irq/virq/ to make clear we are using virtual IRQ
        - Update the commit message
        - Rename vgic_generate_* to vgic_fetch
        - Improve code comment
        - Move the introduction of vgic_rank_init in a separate patch
        - Make use of rank->idx

    Changes in v1:
        - Patch added
---
 xen/arch/arm/vgic-v2.c     |  86 +++++++++++++-------------------
 xen/arch/arm/vgic-v3.c     | 119 ++++++++++++++++++++++++---------------------
 xen/arch/arm/vgic.c        |  45 ++++++++++++-----
 xen/include/asm-arm/vgic.h |  18 +++----
 4 files changed, 137 insertions(+), 131 deletions(-)

diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
index ad2ea0a..62159bf 100644
--- a/xen/arch/arm/vgic-v2.c
+++ b/xen/arch/arm/vgic-v2.c
@@ -61,8 +61,31 @@ void vgic_v2_setup_hw(paddr_t dbase, paddr_t cbase, paddr_t csize,
 #define NR_BITS_PER_TARGET  (32U / NR_TARGETS_PER_ITARGETSR)
 
 /*
- * Store an ITARGETSR register. This function only deals with ITARGETSR8
- * and onwards.
+ * Fetch an ITARGETSR register based on the offset from ITARGETSR0. Only
+ * one vCPU will be listed for a given vIRQ.
+ *
+ * Note the offset will be aligned to the appropriate boundary.
+ */
+static uint32_t vgic_fetch_itargetsr(struct vgic_irq_rank *rank,
+                                     unsigned int offset)
+{
+    uint32_t reg = 0;
+    unsigned int i;
+
+    ASSERT(spin_is_locked(&rank->lock));
+
+    offset &= INTERRUPT_RANK_MASK;
+    offset &= ~(NR_TARGETS_PER_ITARGETSR - 1);
+
+    for ( i = 0; i < NR_TARGETS_PER_ITARGETSR; i++, offset++ )
+        reg |= (1 << rank->vcpu[offset]) << (i * NR_BITS_PER_TARGET);
+
+    return reg;
+}
+
+/*
+ * Store an ITARGETSR register in a convenient way and migrate the vIRQ
+ * if necessary. This function only deals with ITARGETSR8 and onwards.
  *
  * Note the offset will be aligned to the appropriate boundary.
  */
@@ -70,7 +93,6 @@ static void vgic_store_itargetsr(struct domain *d, struct vgic_irq_rank *rank,
                                  unsigned int offset, uint32_t itargetsr)
 {
     unsigned int i;
-    unsigned int regidx = REG_RANK_INDEX(8, offset, DABT_WORD);
     unsigned int virq;
 
     ASSERT(spin_is_locked(&rank->lock));
@@ -92,7 +114,7 @@ static void vgic_store_itargetsr(struct domain *d, struct vgic_irq_rank *rank,
     for ( i = 0; i < NR_TARGETS_PER_ITARGETSR; i++, offset++, virq++ )
     {
         unsigned int new_target, old_target;
-        uint8_t new_mask, old_mask;
+        uint8_t new_mask;
 
         /*
          * Don't need to mask as we rely on new_mask to fit for only one
@@ -101,7 +123,6 @@ static void vgic_store_itargetsr(struct domain *d, struct vgic_irq_rank *rank,
         BUILD_BUG_ON((sizeof (new_mask) * 8) != NR_BITS_PER_TARGET);
 
         new_mask = itargetsr >> (i * NR_BITS_PER_TARGET);
-        old_mask = vgic_byte_read(rank->v2.itargets[regidx], i);
 
         /*
          * SPIs are using the 1-N model (see 1.4.3 in ARM IHI 0048B).
@@ -111,10 +132,6 @@ static void vgic_store_itargetsr(struct domain *d, struct vgic_irq_rank *rank,
          * in the target list
          */
         new_target = ffs(new_mask);
-        old_target = ffs(old_mask);
-
-        /* The current target should always be valid */
-        ASSERT(old_target && (old_target <= d->max_vcpus));
 
         /*
          * Ignore the write request for this interrupt if the new target
@@ -133,7 +150,8 @@ static void vgic_store_itargetsr(struct domain *d, struct vgic_irq_rank *rank,
 
         /* The vCPU ID always starts from 0 */
         new_target--;
-        old_target--;
+
+        old_target = rank->vcpu[offset];
 
         /* Only migrate the vIRQ if the target vCPU has changed */
         if ( new_target != old_target )
@@ -143,9 +161,7 @@ static void vgic_store_itargetsr(struct domain *d, struct vgic_irq_rank *rank,
                              virq);
         }
 
-        /* Bit corresponding to unimplemented CPU is write-ignore. */
-        new_mask &= (1 << d->max_vcpus) - 1;
-        vgic_byte_write(&rank->v2.itargets[regidx], new_mask, i);
+        rank->vcpu[offset] = new_target;
     }
 }
 
@@ -225,8 +241,7 @@ static int vgic_v2_distr_mmio_read(struct vcpu *v, mmio_info_t *info,
         rank = vgic_rank_offset(v, 8, gicd_reg - GICD_ITARGETSR, DABT_WORD);
         if ( rank == NULL) goto read_as_zero;
         vgic_lock_rank(v, rank, flags);
-        *r = rank->v2.itargets[REG_RANK_INDEX(8, gicd_reg - GICD_ITARGETSR,
-                                              DABT_WORD)];
+        *r = vgic_fetch_itargetsr(rank, gicd_reg - GICD_ITARGETSR);
         if ( dabt.size == DABT_BYTE )
             *r = vgic_byte_read(*r, gicd_reg);
         vgic_unlock_rank(v, rank, flags);
@@ -446,9 +461,7 @@ static int vgic_v2_distr_mmio_write(struct vcpu *v, mmio_info_t *info,
             itargetsr = r;
         else
         {
-            itargetsr = rank->v2.itargets[REG_RANK_INDEX(8,
-                                    gicd_reg - GICD_ITARGETSR,
-                                    DABT_WORD)];
+            itargetsr = vgic_fetch_itargetsr(rank, gicd_reg - GICD_ITARGETSR);
             vgic_byte_write(&itargetsr, r, gicd_reg);
         }
         vgic_store_itargetsr(v->domain, rank, gicd_reg - GICD_ITARGETSR,
@@ -553,43 +566,16 @@ static const struct mmio_handler_ops vgic_v2_distr_mmio_handler = {
     .write = vgic_v2_distr_mmio_write,
 };
 
-static struct vcpu *vgic_v2_get_target_vcpu(struct vcpu *v, unsigned int irq)
-{
-    unsigned long target;
-    struct vcpu *v_target;
-    struct vgic_irq_rank *rank = vgic_rank_irq(v, irq);
-    ASSERT(spin_is_locked(&rank->lock));
-
-    target = vgic_byte_read(rank->v2.itargets[REG_RANK_INDEX(8,
-                                              irq, DABT_WORD)], irq & 0x3);
-
-    /* 1-N SPI should be delivered as pending to all the vcpus in the
-     * mask, but here we just return the first vcpu for simplicity and
-     * because it would be too slow to do otherwise. */
-    target = find_first_bit(&target, 8);
-    ASSERT(target >= 0 && target < v->domain->max_vcpus);
-    v_target = v->domain->vcpu[target];
-    return v_target;
-}
-
 static int vgic_v2_vcpu_init(struct vcpu *v)
 {
-    int i;
-
-    /* For SGI and PPI the target is always this CPU */
-    for ( i = 0 ; i < 8 ; i++ )
-        v->arch.vgic.private_irqs->v2.itargets[i] =
-              (1<<(v->vcpu_id+0))
-            | (1<<(v->vcpu_id+8))
-            | (1<<(v->vcpu_id+16))
-            | (1<<(v->vcpu_id+24));
+    /* Nothing specific to initialize for this driver */
 
     return 0;
 }
 
 static int vgic_v2_domain_init(struct domain *d)
 {
-    int i, ret;
+    int ret;
     paddr_t cbase, csize;
     paddr_t vbase;
 
@@ -635,11 +621,6 @@ static int vgic_v2_domain_init(struct domain *d)
     if ( ret )
         return ret;
 
-    /* By default deliver to CPU0 */
-    for ( i = 0; i < DOMAIN_NR_RANKS(d); i++ )
-        memset(d->arch.vgic.shared_irqs[i].v2.itargets, 0x1,
-               sizeof(d->arch.vgic.shared_irqs[i].v2.itargets));
-
     register_mmio_handler(d, &vgic_v2_distr_mmio_handler, d->arch.vgic.dbase,
                           PAGE_SIZE, NULL);
 
@@ -649,7 +630,6 @@ static int vgic_v2_domain_init(struct domain *d)
 static const struct vgic_ops vgic_v2_ops = {
     .vcpu_init   = vgic_v2_vcpu_init,
     .domain_init = vgic_v2_domain_init,
-    .get_target_vcpu = vgic_v2_get_target_vcpu,
     .max_vcpus = 8,
 };
 
diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index b5249ff..4f976d4 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -89,18 +89,72 @@ static struct vcpu *vgic_v3_irouter_to_vcpu(struct domain *d, uint64_t irouter)
     return d->vcpu[vcpu_id];
 }
 
-static struct vcpu *vgic_v3_get_target_vcpu(struct vcpu *v, unsigned int irq)
-{
-    struct vcpu *v_target;
-    struct vgic_irq_rank *rank = vgic_rank_irq(v, irq);
+#define NR_BYTES_PER_IROUTER 8U
 
+/*
+ * Fetch an IROUTER register based on the offset from IROUTER0. Only one
+ * vCPU will be listed for a given vIRQ.
+ *
+ * Note the offset will be aligned to the appropriate boundary.
+ */
+static uint64_t vgic_fetch_irouter(struct vgic_irq_rank *rank,
+                                   unsigned int offset)
+{
     ASSERT(spin_is_locked(&rank->lock));
 
-    v_target = vgic_v3_irouter_to_vcpu(v->domain, rank->v3.irouter[irq % 32]);
+    /* There is exactly 1 vIRQ per IROUTER */
+    offset /= NR_BYTES_PER_IROUTER;
 
-    ASSERT(v_target != NULL);
+    /* Get the index in the rank */
+    offset &= INTERRUPT_RANK_MASK;
 
-    return v_target;
+    return vcpuid_to_vaffinity(rank->vcpu[offset]);
+}
+
+/*
+ * Store an IROUTER register in a convenient way and migrate the vIRQ
+ * if necessary. This function only deals with IROUTER32 and onwards.
+ *
+ * Note the offset will be aligned to the appropriate boundary.
+ */
+static void vgic_store_irouter(struct domain *d, struct vgic_irq_rank *rank,
+                               unsigned int offset, uint64_t irouter)
+{
+    struct vcpu *new_vcpu, *old_vcpu;
+    unsigned int virq;
+
+    /* There is 1 vIRQ per IROUTER */
+    virq = offset / NR_BYTES_PER_IROUTER;
+
+    /*
+     * The IROUTER0-31, used for SGIs/PPIs, are reserved and should
+     * never call this function.
+     */
+    ASSERT(virq >= 32);
+
+    /* Get the index in the rank */
+    offset = virq & INTERRUPT_RANK_MASK;
+
+    new_vcpu = vgic_v3_irouter_to_vcpu(d, irouter);
+    old_vcpu = d->vcpu[rank->vcpu[offset]];
+
+    /*
+     * From the spec (see 8.9.13 in IHI 0069A), any write with an
+     * invalid vCPU will lead to the interrupt being ignored.
+     *
+     * But the current code to inject an IRQ is not able to cope with
+     * invalid vCPU. So for now, just ignore the write.
+     *
+     * TODO: Respect the spec
+     */
+    if ( !new_vcpu )
+        return;
+
+    /* Only migrate the IRQ if the target vCPU has changed */
+    if ( new_vcpu != old_vcpu )
+        vgic_migrate_irq(old_vcpu, new_vcpu, virq);
+
+    rank->vcpu[offset] = new_vcpu->vcpu_id;
 }
 
 static int __vgic_v3_rdistr_rd_mmio_read(struct vcpu *v, mmio_info_t *info,
@@ -726,8 +780,7 @@ static int vgic_v3_distr_mmio_read(struct vcpu *v, mmio_info_t *info,
                                 DABT_DOUBLE_WORD);
         if ( rank == NULL ) goto read_as_zero;
         vgic_lock_rank(v, rank, flags);
-        *r = rank->v3.irouter[REG_RANK_INDEX(64,
-                              (gicd_reg - GICD_IROUTER), DABT_DOUBLE_WORD)];
+        *r = vgic_fetch_irouter(rank, gicd_reg - GICD_IROUTER);
         vgic_unlock_rank(v, rank, flags);
         return 1;
     case GICD_NSACR ... GICD_NSACRN:
@@ -812,8 +865,6 @@ static int vgic_v3_distr_mmio_write(struct vcpu *v, mmio_info_t *info,
     struct hsr_dabt dabt = info->dabt;
     struct vgic_irq_rank *rank;
     unsigned long flags;
-    uint64_t new_irouter, old_irouter;
-    struct vcpu *old_vcpu, *new_vcpu;
     int gicd_reg = (int)(info->gpa - v->domain->arch.vgic.dbase);
 
     perfc_incr(vgicd_writes);
@@ -881,32 +932,8 @@ static int vgic_v3_distr_mmio_write(struct vcpu *v, mmio_info_t *info,
         rank = vgic_rank_offset(v, 64, gicd_reg - GICD_IROUTER,
                                 DABT_DOUBLE_WORD);
         if ( rank == NULL ) goto write_ignore;
-        new_irouter = r;
         vgic_lock_rank(v, rank, flags);
-
-        old_irouter = rank->v3.irouter[REG_RANK_INDEX(64,
-                                       (gicd_reg - GICD_IROUTER),
-                                       DABT_DOUBLE_WORD)];
-        old_vcpu = vgic_v3_irouter_to_vcpu(v->domain, old_irouter);
-        new_vcpu = vgic_v3_irouter_to_vcpu(v->domain, new_irouter);
-
-        if ( !new_vcpu )
-        {
-            printk(XENLOG_G_DEBUG
-                   "%pv: vGICD: wrong irouter at offset %#08x val 
%#"PRIregister,
-                   v, gicd_reg, r);
-            vgic_unlock_rank(v, rank, flags);
-            /*
-             * TODO: Don't inject a fault to the guest when the MPIDR is
-             * not valid. From the spec, the interrupt should be
-             * ignored.
-             */
-            return 0;
-        }
-        rank->v3.irouter[REG_RANK_INDEX(64, (gicd_reg - GICD_IROUTER),
-                         DABT_DOUBLE_WORD)] = new_irouter;
-        if ( old_vcpu != new_vcpu )
-            vgic_migrate_irq(old_vcpu, new_vcpu, (gicd_reg - GICD_IROUTER)/8);
+        vgic_store_irouter(v->domain, rank, gicd_reg - GICD_IROUTER, r);
         vgic_unlock_rank(v, rank, flags);
         return 1;
     case GICD_NSACR ... GICD_NSACRN:
@@ -1036,7 +1063,6 @@ static const struct mmio_handler_ops vgic_distr_mmio_handler = {
 static int vgic_v3_vcpu_init(struct vcpu *v)
 {
     int i;
-    uint64_t affinity;
     paddr_t rdist_base;
     struct vgic_rdist_region *region;
     unsigned int last_cpu;
@@ -1045,15 +1071,6 @@ static int vgic_v3_vcpu_init(struct vcpu *v)
     struct domain *d = v->domain;
     uint32_t rdist_stride = d->arch.vgic.rdist_stride;
 
-    /* For SGI and PPI the target is always this CPU */
-    affinity = (MPIDR_AFFINITY_LEVEL(v->arch.vmpidr, 3) << 32 |
-                MPIDR_AFFINITY_LEVEL(v->arch.vmpidr, 2) << 16 |
-                MPIDR_AFFINITY_LEVEL(v->arch.vmpidr, 1) << 8  |
-                MPIDR_AFFINITY_LEVEL(v->arch.vmpidr, 0));
-
-    for ( i = 0 ; i < 32 ; i++ )
-        v->arch.vgic.private_irqs->v3.irouter[i] = affinity;
-
     /*
      * Find the region where the re-distributor lives. For this purpose,
      * we look one region ahead as we have only the first CPU in hand.
@@ -1098,7 +1115,7 @@ static int vgic_v3_vcpu_init(struct vcpu *v)
 
 static int vgic_v3_domain_init(struct domain *d)
 {
-    int i, idx;
+    int i;
 
     /*
      * Domain 0 gets the hardware address.
@@ -1150,13 +1167,6 @@ static int vgic_v3_domain_init(struct domain *d)
         d->arch.vgic.rdist_regions[0].first_cpu = 0;
     }
 
-    /* By default deliver to CPU0 */
-    for ( i = 0; i < DOMAIN_NR_RANKS(d); i++ )
-    {
-        for ( idx = 0; idx < 32; idx++ )
-            d->arch.vgic.shared_irqs[i].v3.irouter[idx] = 0;
-    }
-
     /* Register mmio handle for the Distributor */
     register_mmio_handler(d, &vgic_distr_mmio_handler, d->arch.vgic.dbase,
                           SZ_64K, NULL);
@@ -1182,7 +1192,6 @@ static int vgic_v3_domain_init(struct domain *d)
 static const struct vgic_ops v3_ops = {
     .vcpu_init   = vgic_v3_vcpu_init,
     .domain_init = vgic_v3_domain_init,
-    .get_target_vcpu  = vgic_v3_get_target_vcpu,
     .emulate_sysreg  = vgic_v3_emulate_sysreg,
     /*
      * We use both AFF1 and AFF0 in (v)MPIDR. Thus, the max number of CPU
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 7bb4570..531ce5d 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -68,11 +68,24 @@ static void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq)
     p->irq = virq;
 }
 
-static void vgic_rank_init(struct vgic_irq_rank *rank, uint8_t index)
+static void vgic_rank_init(struct vgic_irq_rank *rank, uint8_t index,
+                           unsigned int vcpu)
 {
+    unsigned int i;
+
+    /*
+     * Make sure that the type chosen to store the target is able to
+     * store a VCPU ID between 0 and the maximum number of virtual CPUs
+     * supported.
+     */
+    BUILD_BUG_ON((1 << (sizeof(rank->vcpu[0]) * 8)) < MAX_VIRT_CPUS);
+
     spin_lock_init(&rank->lock);
 
     rank->index = index;
+
+    for ( i = 0; i < NR_INTERRUPT_PER_RANK; i++ )
+        rank->vcpu[i] = vcpu;
 }
 
 int domain_vgic_init(struct domain *d, unsigned int nr_spis)
@@ -121,8 +134,9 @@ int domain_vgic_init(struct domain *d, unsigned int nr_spis)
     for (i=0; i<d->arch.vgic.nr_spis; i++)
         vgic_init_pending_irq(&d->arch.vgic.pending_irqs[i], i + 32);
 
+    /* SPIs are routed to VCPU0 by default */
     for ( i = 0; i < DOMAIN_NR_RANKS(d); i++ )
-        vgic_rank_init(&d->arch.vgic.shared_irqs[i], i + 1);
+        vgic_rank_init(&d->arch.vgic.shared_irqs[i], i + 1, 0);
 
     ret = d->arch.vgic.handler->domain_init(d);
     if ( ret )
@@ -176,7 +190,8 @@ int vcpu_vgic_init(struct vcpu *v)
     if ( v->arch.vgic.private_irqs == NULL )
       return -ENOMEM;
 
-    vgic_rank_init(v->arch.vgic.private_irqs, 0);
+    /* SGIs/PPIs are always routed to this VCPU */
+    vgic_rank_init(v->arch.vgic.private_irqs, 0, v->vcpu_id);
 
     v->domain->arch.vgic.handler->vcpu_init(v);
 
@@ -197,17 +212,27 @@ int vcpu_vgic_free(struct vcpu *v)
     return 0;
 }
 
+/* The function should be called with the rank lock taken. */
+static struct vcpu *__vgic_get_target_vcpu(struct vcpu *v, unsigned int virq)
+{
+    struct vgic_irq_rank *rank = vgic_rank_irq(v, virq);
+
+    ASSERT(spin_is_locked(&rank->lock));
+
+    return v->domain->vcpu[rank->vcpu[virq & INTERRUPT_RANK_MASK]];
+}
+
 /* takes the rank lock */
-struct vcpu *vgic_get_target_vcpu(struct vcpu *v, unsigned int irq)
+struct vcpu *vgic_get_target_vcpu(struct vcpu *v, unsigned int virq)
 {
-    struct domain *d = v->domain;
     struct vcpu *v_target;
-    struct vgic_irq_rank *rank = vgic_rank_irq(v, irq);
+    struct vgic_irq_rank *rank = vgic_rank_irq(v, virq);
     unsigned long flags;
 
     vgic_lock_rank(v, rank, flags);
-    v_target = d->arch.vgic.handler->get_target_vcpu(v, irq);
+    v_target = __vgic_get_target_vcpu(v, virq);
     vgic_unlock_rank(v, rank, flags);
+
     return v_target;
 }
 
@@ -286,7 +311,6 @@ void arch_move_irqs(struct vcpu *v)
 
 void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
 {
-    struct domain *d = v->domain;
     const unsigned long mask = r;
     struct pending_irq *p;
     unsigned int irq;
@@ -296,7 +320,7 @@ void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
 
     while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
         irq = i + (32 * n);
-        v_target = d->arch.vgic.handler->get_target_vcpu(v, irq);
+        v_target = __vgic_get_target_vcpu(v, irq);
         p = irq_to_pending(v_target, irq);
         clear_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
         gic_remove_from_queues(v_target, irq);
@@ -312,7 +336,6 @@ void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
 
 void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
 {
-    struct domain *d = v->domain;
     const unsigned long mask = r;
     struct pending_irq *p;
     unsigned int irq;
@@ -322,7 +345,7 @@ void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
 
     while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
         irq = i + (32 * n);
-        v_target = d->arch.vgic.handler->get_target_vcpu(v, irq);
+        v_target = __vgic_get_target_vcpu(v, irq);
         p = irq_to_pending(v_target, irq);
         set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
         spin_lock_irqsave(&v_target->arch.vgic.lock, flags);
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index cb51a9e..a1ef429 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -104,14 +104,11 @@ struct vgic_irq_rank {
         uint32_t ipriorityr[8];
     };
 
-    union {
-        struct {
-            uint32_t itargets[8];
-        }v2;
-        struct {
-            uint64_t irouter[32];
-        }v3;
-    };
+    /*
+     * It's more convenient to store a target VCPU per vIRQ
+     * than the register ITARGETSR/IROUTER itself
+     */
+    uint8_t vcpu[32];
 };
 
 struct sgi_target {
@@ -130,9 +127,6 @@ struct vgic_ops {
     int (*vcpu_init)(struct vcpu *v);
     /* Domain specific initialization of vGIC */
     int (*domain_init)(struct domain *d);
-    /* Get the target vcpu for a given virq. The rank lock is already taken
-     * when calling this. */
-    struct vcpu *(*get_target_vcpu)(struct vcpu *v, unsigned int irq);
     /* vGIC sysreg emulation */
     int (*emulate_sysreg)(struct cpu_user_regs *regs, union hsr hsr);
     /* Maximum number of vCPU supported */
@@ -205,7 +199,7 @@ enum gic_sgi_mode;
 extern int domain_vgic_init(struct domain *d, unsigned int nr_spis);
 extern void domain_vgic_free(struct domain *d);
 extern int vcpu_vgic_init(struct vcpu *v);
-extern struct vcpu *vgic_get_target_vcpu(struct vcpu *v, unsigned int irq);
+extern struct vcpu *vgic_get_target_vcpu(struct vcpu *v, unsigned int virq);
 extern void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq);
 extern void vgic_vcpu_inject_spi(struct domain *d, unsigned int virq);
 extern void vgic_clear_pending_irqs(struct vcpu *v);
-- 
2.1.4

