Re: [Xen-devel] [PATCH V5 05/10] xen/arm64: gicv3: Use AFF1 when translating ICC_SGI1R_EL1 to cpumask
On Sat, May 30, 2015 at 07:07:26PM +0800, Chen Baozi wrote:
> From: Chen Baozi <baozich@xxxxxxxxx>
>
> To support more than 16 vCPUs, we have to calculate cpumask with AFF1
> field value in ICC_SGI1R_EL1.
>
> Signed-off-by: Chen Baozi <baozich@xxxxxxxxx>
> ---
> xen/arch/arm/vgic-v3.c | 9 ++++++++-
> xen/include/asm-arm/gic_v3_defs.h | 3 +++
> 2 files changed, 11 insertions(+), 1 deletion(-)
>
> diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
> index a283c8c..21d8d3f 100644
> --- a/xen/arch/arm/vgic-v3.c
> +++ b/xen/arch/arm/vgic-v3.c
> @@ -976,10 +976,17 @@ static inline void gicv3_sgir_to_cpumask(cpumask_t *cpumask,
>                                           const register_t sgir)
> {
> unsigned long target_list;
> + int aff1;
>
> target_list = sgir & ICH_SGI_TARGETLIST_MASK;
> - bitmap_copy(cpumask_bits(cpumask), &target_list, ICH_SGI_TARGET_BITS);
> + /* We assume that only AFF1 is used in ICC_SGI1R_EL1. */
> + aff1 = (sgir >> ICH_SGI_AFFINITY_LEVEL(1)) & ICH_SGI_AFFx_MASK;
>
> + BUILD_BUG_ON(sizeof(cpumask_t)*8 < MAX_VIRT_CPUS);
> + BUG_ON(((aff1+1) * ICH_SGI_TARGET_BITS) > NR_CPUS);
> +
> + memcpy((uint16_t *)cpumask + aff1, &target_list,
> + (ICH_SGI_TARGET_BITS/8));
> }
>
> static int vgic_v3_to_sgi(struct vcpu *v, register_t sgir)
> diff --git a/xen/include/asm-arm/gic_v3_defs.h b/xen/include/asm-arm/gic_v3_defs.h
> index e106e67..3743e66 100644
> --- a/xen/include/asm-arm/gic_v3_defs.h
> +++ b/xen/include/asm-arm/gic_v3_defs.h
> @@ -153,6 +153,9 @@
> #define ICH_SGI_IRQ_MASK 0xf
> #define ICH_SGI_TARGETLIST_MASK 0xffff
> #define ICH_SGI_TARGET_BITS 16
> +#define ICH_SGI_AFFx_MASK 0xff
> +#define ICH_SGI_AFFINITY_LEVEL(x) (16 * (x))
> +
^^ Sorry for the unnecessary trailing line...
Baozi.
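
For reference, a minimal stand-alone sketch (not Xen code; the constants are copied
from the patch, and the linear vCPU numbering vcpu_id = AFF1 * 16 + Aff0 is the
assumption the patch relies on) of how the AFF1 and TargetList fields of
ICC_SGI1R_EL1 map to target vCPU numbers:

#include <stdint.h>
#include <stdio.h>

/* Field layout of ICC_SGI1R_EL1, matching the definitions added by the patch. */
#define ICH_SGI_TARGETLIST_MASK   0xffffULL
#define ICH_SGI_TARGET_BITS       16
#define ICH_SGI_AFFx_MASK         0xffULL
#define ICH_SGI_AFFINITY_LEVEL(x) (16 * (x))

/*
 * Hypothetical decoder: print the vCPU IDs addressed by an ICC_SGI1R_EL1
 * value, assuming vCPU IDs are assigned so that vcpu_id = AFF1 * 16 + Aff0.
 */
static void sgir_to_vcpus(uint64_t sgir)
{
    uint16_t target_list = sgir & ICH_SGI_TARGETLIST_MASK;
    unsigned int aff1 = (sgir >> ICH_SGI_AFFINITY_LEVEL(1)) & ICH_SGI_AFFx_MASK;
    unsigned int bit;

    for ( bit = 0; bit < ICH_SGI_TARGET_BITS; bit++ )
        if ( target_list & (1u << bit) )
            printf("SGI targets vCPU %u\n", aff1 * ICH_SGI_TARGET_BITS + bit);
}

int main(void)
{
    /* AFF1 = 2, TargetList = 0b101 -> vCPUs 32 and 34. */
    sgir_to_vcpus((2ULL << ICH_SGI_AFFINITY_LEVEL(1)) | 0x5);
    return 0;
}

With that numbering, AFF1 selects which block of 16 vCPUs the TargetList bits
index into, which is what the patch's memcpy of 16 bits at a uint16_t offset of
aff1 into the cpumask implements.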
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel