Re: [Xen-devel] [PATCH V2 4/8] xen/arm: Use the new mapping relations between vCPUID and vMPIDR

Hi Chen,

On 25/05/2015 04:34, Chen Baozi wrote:
On Sun, May 24, 2015 at 01:51:21PM +0100, Julien Grall wrote:
@@ -80,9 +72,7 @@ static struct vcpu *vgic_v3_get_target_vcpu(struct vcpu *v, unsigned int irq)


-    target = rank->v3.irouter[irq % 32];
-    target &= ~(GICD_IROUTER_SPI_MODE_ANY);
-    target &= MPIDR_AFF0_MASK;
+    target = vaffinity_to_vcpuid(rank->v3.irouter[irq % 32]);

When irouter.IRM = 1 (i.e. any processor can be used for SPIs), the affinity
may be unknown.

Although, when this register is saved we make sure to have AFF0 and AFF1 set
to 0.

This change, like the current code, isn't clear about it. I would be tempted
to add a specific case for irouter.IRM = 1. But I don't mind if you only add a
comment.

If we add a specific case for irouter.IRM == 1, then which vCPU will it return?

You can choose any vCPU, although vCPU0 is the best choice because it is always allocated and running.

Would it be better to check this bit before calling this function?

No, this function is called in generic code to get the VCPU target for a given interrupt.

      ASSERT(target >= 0 && target < v->domain->max_vcpus);

      return v->domain->vcpu[target];
@@ -751,7 +741,7 @@ static int vgic_v3_distr_mmio_read(struct vcpu *v, mmio_info_t *info)
              vgic_unlock_rank(v, rank, flags);
              return 1;
-        vcpu_id = irouter;
+        vcpu_id = vaffinity_to_vcpuid(irouter);
          *r = vgic_v3_vcpu_to_irouter(v, vcpu_id);

The current code is pointless: irouter already contains the value to return,
and vgic_v3_vcpu_to_irouter is just an identity function.

The read emulation for IROUTER can be simplified a lot to only return the
irouter value, which is already valid.

I can send a patch to apply before your series to clean up this IROUTER
code. It would make some of your changes unnecessary.

That will be fine.

I will try to send one by tomorrow.

          vgic_unlock_rank(v, rank, flags);
          return 1;
@@ -841,6 +831,7 @@ static int vgic_v3_distr_mmio_write(struct vcpu *v, mmio_info_t *info)
      uint64_t new_irouter, new_target, old_target;
      struct vcpu *old_vcpu, *new_vcpu;
      int gicd_reg = (int)(info->gpa - v->domain->arch.vgic.dbase);
+    uint32_t vcpu_id;


@@ -925,8 +916,9 @@ static int vgic_v3_distr_mmio_write(struct vcpu *v, mmio_info_t *info)
-            new_target = new_irouter & MPIDR_AFF0_MASK;
-            if ( new_target >= v->domain->max_vcpus )
+            new_target = new_irouter & MPIDR_HWID_MASK;
+            vcpu_id = vaffinity_to_vcpuid(new_irouter);
+            if ( vcpu_id >= v->domain->max_vcpus )
                         "%pv: vGICD: wrong irouter at offset %#08x\n val 0x%lx vcpu 
@@ -934,7 +926,7 @@ static int vgic_v3_distr_mmio_write(struct vcpu *v, mmio_info_t *info)
                  vgic_unlock_rank(v, rank, flags);
                  return 0;
-            new_vcpu = vgic_v3_irouter_to_vcpu(v, new_irouter);

I would prefer to keep vgic_v3_irouter_to_vcpu and have it return NULL if the
vCPU ID is too high.

The emulation code would be:

      new_vcpu = vgic_v3_irouter_to_vcpu(v, new_irouter);
      if ( !new_vcpu )

Although, the current emulation is wrong: if the guest passes a wrong MPIDR,
we should just ignore the setting and leave the interrupt pending. Anyway, I
think that would require more work in Xen, so I'm okay with the current
behavior.

+            new_vcpu = v->domain->vcpu[vcpu_id];

          rank->v3.irouter[REG_RANK_INDEX(64, (gicd_reg - GICD_IROUTER),
diff --git a/xen/arch/arm/vpsci.c b/xen/arch/arm/vpsci.c
index 5d899be..1c1e7de 100644
--- a/xen/arch/arm/vpsci.c
+++ b/xen/arch/arm/vpsci.c
@@ -33,7 +33,7 @@ static int do_common_cpu_on(register_t target_cpu, register_t 
      register_t vcpuid;

      if( ver == XEN_PSCI_V_0_2 )
-        vcpuid = (target_cpu & MPIDR_HWID_MASK);
+        vcpuid = vaffinity_to_vcpuid(target_cpu);
          vcpuid = target_cpu;

AFAICT, in PSCI 0.1 target_cpu is a CPUID, which is an MPIDR-like value. If
so, I think we may need to call vaffinity_to_vcpuid.

But I wasn't able to confirm this with the spec; I guessed it from the Linux
usage. Maybe there is a limit on the number of CPUs used with PSCI 0.1?

I didn't read the spec, I just changed the code according to the current

I wasn't able to find any difference between MPIDR and CPUID in the spec.

(CC Parth the author of the code).


Julien Grall

Xen-devel mailing list