Re: [Xen-devel] [PATCH V2 4/8] xen/arm: Use the new mapping relations between vCPUID and vMPIDR
On Sun, May 24, 2015 at 01:51:21PM +0100, Julien Grall wrote:
> Hi Chen,
>
> On 23/05/2015 14:52, Chen Baozi wrote:
> >From: Chen Baozi <baozich@xxxxxxxxx>
> >
> >There are 3 places to change:
> >
> >* Initialise vMPIDR value in vcpu_initialise()
> >* Find the vCPU from vMPIDR affinity information when accessing GICD
> > registers in vGIC
> >* Find the vCPU from vMPIRR affinity information when booting with vPSCI
>
> s/vMPIRR/vMPIDR/
>
> > in vGIC
> >
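For reference: the vcpuid_to_vaffinity/vaffinity_to_vcpuid helpers this
patch relies on are introduced earlier in the series. Roughly, they split
the vCPU ID across AFF0 and AFF1. The following is only a sketch of the
idea, assuming AFF0 carries the low 4 bits (GICv3 can only target 16 CPUs
per cluster through ICC_SGI1R) and assuming an MPIDR_AFF1_SHIFT constant
alongside the MPIDR_AFF0_SHIFT used below:

    static inline unsigned int vaffinity_to_vcpuid(register_t vaff)
    {
        unsigned int vcpuid;

        vaff &= MPIDR_HWID_MASK;

        /* AFF0 holds the low 4 bits of the vCPU ID... */
        vcpuid = (vaff >> MPIDR_AFF0_SHIFT) & 0x0f;
        /* ...and AFF1 holds the remaining bits. */
        vcpuid |= ((vaff >> MPIDR_AFF1_SHIFT) & 0xff) << 4;

        return vcpuid;
    }

    static inline register_t vcpuid_to_vaffinity(unsigned int vcpuid)
    {
        register_t vaff;

        /* Only 16 vCPUs per cluster: the low 4 bits go into AFF0... */
        vaff = (vcpuid & 0x0f) << MPIDR_AFF0_SHIFT;
        /* ...and the rest goes into AFF1. */
        vaff |= (register_t)((vcpuid >> 4) & 0xff) << MPIDR_AFF1_SHIFT;

        return vaff;
    }
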
> >Signed-off-by: Chen Baozi <baozich@xxxxxxxxx>
> >---
> > xen/arch/arm/domain.c | 6 +-----
> > xen/arch/arm/vgic-v3.c | 22 +++++++---------------
> > xen/arch/arm/vpsci.c | 2 +-
> > 3 files changed, 9 insertions(+), 21 deletions(-)
> >
> >diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> >index 2bde26e..0cf147c 100644
> >--- a/xen/arch/arm/domain.c
> >+++ b/xen/arch/arm/domain.c
> >@@ -501,11 +501,7 @@ int vcpu_initialise(struct vcpu *v)
> >
> > v->arch.sctlr = SCTLR_GUEST_INIT;
> >
> >- /*
> >- * By default exposes an SMP system with AFF0 set to the VCPU ID
> >- * TODO: Handle multi-threading processor and cluster
> >- */
> >- v->arch.vmpidr = MPIDR_SMP | (v->vcpu_id << MPIDR_AFF0_SHIFT);
> >+ v->arch.vmpidr = MPIDR_SMP | vcpuid_to_vaffinity(v->vcpu_id);
> >
> > v->arch.actlr = READ_SYSREG32(ACTLR_EL1);
> >
> >diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
> >index 40e1892..12007d8 100644
> >--- a/xen/arch/arm/vgic-v3.c
> >+++ b/xen/arch/arm/vgic-v3.c
> >@@ -50,14 +50,6 @@
> > */
> > #define VGICD_CTLR_DEFAULT (GICD_CTLR_ARE_NS)
> >
> >-static struct vcpu *vgic_v3_irouter_to_vcpu(struct vcpu *v, uint64_t irouter)
> >-{
> >- irouter &= ~(GICD_IROUTER_SPI_MODE_ANY);
> >- irouter = irouter & MPIDR_AFF0_MASK;
> >-
> >- return v->domain->vcpu[irouter];
> >-}
> >-
> > static uint64_t vgic_v3_vcpu_to_irouter(struct vcpu *v,
> > unsigned int vcpu_id)
> > {
> >@@ -80,9 +72,7 @@ static struct vcpu *vgic_v3_get_target_vcpu(struct vcpu *v, unsigned int irq)
> >
> > ASSERT(spin_is_locked(&rank->lock));
> >
> >- target = rank->v3.irouter[irq % 32];
> >- target &= ~(GICD_IROUTER_SPI_MODE_ANY);
> >- target &= MPIDR_AFF0_MASK;
> >+ target = vaffinity_to_vcpuid(rank->v3.irouter[irq % 32]);
>
> When irouter.IRM = 1 (i.e any processor can be used for SPIs), the affinity
> may be unknown.
>
> Although, when this register is saved we make sure to have AFF0 and AFF1 set
> to 0.
>
> This change, like the current code, isn't clear about it. I would be
> tempted to add a specific case for irouter.IRM = 1. But I don't mind if
> you only add a comment.
If we add a specific case for irouter.IRM == 1, then which vCPU would it
return? Would it be better to check this bit before calling this function?
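
Something like this at the call site is what I have in mind (an untested
sketch; GICD_IROUTER_SPI_MODE_ANY is the IRM bit already used in this
file):

    uint64_t irouter = rank->v3.irouter[irq % 32];
    unsigned int target;

    /*
     * IRM = 1: the SPI may be delivered to any vCPU, so the affinity
     * fields carry no target. Returning the current vCPU is only one
     * possible policy.
     */
    if ( irouter & GICD_IROUTER_SPI_MODE_ANY )
        return v;

    target = vaffinity_to_vcpuid(irouter);
    ASSERT(target < v->domain->max_vcpus);

    return v->domain->vcpu[target];
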
>
> > ASSERT(target >= 0 && target < v->domain->max_vcpus);
> >
> > return v->domain->vcpu[target];
> >@@ -751,7 +741,7 @@ static int vgic_v3_distr_mmio_read(struct vcpu *v, mmio_info_t *info)
> > vgic_unlock_rank(v, rank, flags);
> > return 1;
> > }
> >- vcpu_id = irouter;
> >+ vcpu_id = vaffinity_to_vcpuid(irouter);
> > *r = vgic_v3_vcpu_to_irouter(v, vcpu_id);
>
> The current code is rather pointless: irouter already contains the value
> to return, and vgic_v3_vcpu_to_irouter is just an identity function.
>
> The read emulation for IROUTER can be simplified a lot to only return
> the value of irouter, which is already valid.
>
> I can send a patch to apply before your series to clean up this IROUTER
> code. It would make some of your changes unnecessary.
That will be fine.
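
For what it's worth, I understand the read path would then collapse to
just returning the stored value, something like this (sketch only,
assuming the same REG_RANK_INDEX(64, ..., DABT_DOUBLE_WORD) indexing the
write side uses):

    uint64_t irouter;

    vgic_lock_rank(v, rank, flags);
    irouter = rank->v3.irouter[REG_RANK_INDEX(64, gicd_reg - GICD_IROUTER,
                                              DABT_DOUBLE_WORD)];
    vgic_unlock_rank(v, rank, flags);

    *r = irouter;
    return 1;
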
>
> > vgic_unlock_rank(v, rank, flags);
> > return 1;
> >@@ -841,6 +831,7 @@ static int vgic_v3_distr_mmio_write(struct vcpu *v, mmio_info_t *info)
> > uint64_t new_irouter, new_target, old_target;
> > struct vcpu *old_vcpu, *new_vcpu;
> > int gicd_reg = (int)(info->gpa - v->domain->arch.vgic.dbase);
> >+ uint32_t vcpu_id;
> >
> > perfc_incr(vgicd_writes);
> >
> >@@ -925,8 +916,9 @@ static int vgic_v3_distr_mmio_write(struct vcpu *v, mmio_info_t *info)
> > }
> > else
> > {
> >- new_target = new_irouter & MPIDR_AFF0_MASK;
> >- if ( new_target >= v->domain->max_vcpus )
> >+ new_target = new_irouter & MPIDR_HWID_MASK;
> >+ vcpu_id = vaffinity_to_vcpuid(new_irouter);
> >+ if ( vcpu_id >= v->domain->max_vcpus )
> > {
> > printk(XENLOG_G_DEBUG
> > "%pv: vGICD: wrong irouter at offset %#08x\n val
> > 0x%lx vcpu %x",
> >@@ -934,7 +926,7 @@ static int vgic_v3_distr_mmio_write(struct vcpu *v, mmio_info_t *info)
> > vgic_unlock_rank(v, rank, flags);
> > return 0;
> > }
> >- new_vcpu = vgic_v3_irouter_to_vcpu(v, new_irouter);
>
> I would prefer to keep vgic_v3_irouter_to_vcpu and return NULL if the VCPU
> ID is too high.
>
> The emulation code would be:
>
> new_vcpu = vgic_v3_irouter_to_vcpu(v, new_irouter);
> if ( !new_vcpu )
> {
> printk(.....);
> }
>
> Although, the current emulation is wrong: if the guest passes a wrong
> MPIDR, we should just ignore the setting and let the interrupt go
> pending. Anyway, I think it would require more work in Xen, so I'm okay
> with the current behavior.
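
So the helper would end up looking something like this, I suppose
(untested sketch, folding in the IRM question from above):

    static struct vcpu *vgic_v3_irouter_to_vcpu(struct vcpu *v,
                                                uint64_t irouter)
    {
        unsigned int vcpu_id;

        /*
         * IRM = 1: no specific target; picking the current vCPU is
         * only one possible policy.
         */
        if ( irouter & GICD_IROUTER_SPI_MODE_ANY )
            return v;

        vcpu_id = vaffinity_to_vcpuid(irouter);
        if ( vcpu_id >= v->domain->max_vcpus )
            return NULL;

        return v->domain->vcpu[vcpu_id];
    }
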
>
> >+ new_vcpu = v->domain->vcpu[vcpu_id];
> > }
> >
> > rank->v3.irouter[REG_RANK_INDEX(64, (gicd_reg - GICD_IROUTER),
> >diff --git a/xen/arch/arm/vpsci.c b/xen/arch/arm/vpsci.c
> >index 5d899be..1c1e7de 100644
> >--- a/xen/arch/arm/vpsci.c
> >+++ b/xen/arch/arm/vpsci.c
> >@@ -33,7 +33,7 @@ static int do_common_cpu_on(register_t target_cpu, register_t entry_point,
> > register_t vcpuid;
> >
> > if( ver == XEN_PSCI_V_0_2 )
> >- vcpuid = (target_cpu & MPIDR_HWID_MASK);
> >+ vcpuid = vaffinity_to_vcpuid(target_cpu);
> > else
> > vcpuid = target_cpu;
>
> AFAICT in PSCI 0.1, target_cpu is a CPUID, which is an MPIDR-like value.
> If so, I think we may need to call vaffinity_to_vcpuid there as well.
>
> But I wasn't able to confirm this with the spec; I guessed it from the
> Linux usage. Maybe there is a limit on the number of CPUs usable with
> PSCI 0.1?
I didn't read the spec; I just changed the code following the existing
'& MPIDR_HWID_MASK' logic.
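
If PSCI 0.1 does turn out to pass an MPIDR-like value as well, the
version check could simply go away (sketch, pending confirmation against
the spec):

    /* Both PSCI 0.1 and 0.2 would then hand us an affinity value. */
    vcpuid = vaffinity_to_vcpuid(target_cpu & MPIDR_HWID_MASK);
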
Cheers,
Baozi.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel