[Xen-devel] [PATCH 39/57] ARM: new VGIC: Add ACTIVE registers handlers
The active register handlers are shared between the v2 and v3 emulation,
so their implementation goes into vgic-mmio.c, to be easily referenced
from the v3 emulation as well later.
Since activation/deactivation of an interrupt may happen entirely in the
guest without it ever exiting, we need some extra logic to properly track
the active state.
For clearing the active state, we would basically have to halt the guest
to make sure this is properly propagated into the respective VCPUs. This
is not yet implemented in Xen. Fortunately this feature is mostly used to
reset a just initialised GIC, so chances are we are tasked to clear bits
that are already zero.
Add a simple check to avoid a pointless warning in this case.

Signed-off-by: Andre Przywara <andre.przywara@xxxxxxxxxx>
---
Changelog RFC ... v1:
- remove premature "proper ACTIVE" handler stub
- avoid unnecessary warnings on NO-OP register writes
- extend comments

 xen/arch/arm/vgic/vgic-mmio-v2.c |   4 +-
 xen/arch/arm/vgic/vgic-mmio.c    | 103 +++++++++++++++++++++++++++++++++++++++
 xen/arch/arm/vgic/vgic-mmio.h    |  11 +++++
 3 files changed, 116 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/vgic/vgic-mmio-v2.c b/xen/arch/arm/vgic/vgic-mmio-v2.c
index efdd73301d..c93455fbb2 100644
--- a/xen/arch/arm/vgic/vgic-mmio-v2.c
+++ b/xen/arch/arm/vgic/vgic-mmio-v2.c
@@ -92,10 +92,10 @@ static const struct vgic_register_region vgic_v2_dist_registers[] = {
         vgic_mmio_read_pending, vgic_mmio_write_cpending, 1,
         VGIC_ACCESS_32bit),
     REGISTER_DESC_WITH_BITS_PER_IRQ(GICD_ISACTIVER,
-        vgic_mmio_read_raz, vgic_mmio_write_wi, 1,
+        vgic_mmio_read_active, vgic_mmio_write_sactive, 1,
         VGIC_ACCESS_32bit),
     REGISTER_DESC_WITH_BITS_PER_IRQ(GICD_ICACTIVER,
-        vgic_mmio_read_raz, vgic_mmio_write_wi, 1,
+        vgic_mmio_read_active, vgic_mmio_write_cactive, 1,
         VGIC_ACCESS_32bit),
     REGISTER_DESC_WITH_BITS_PER_IRQ(GICD_IPRIORITYR,
         vgic_mmio_read_raz, vgic_mmio_write_wi, 8,
diff --git a/xen/arch/arm/vgic/vgic-mmio.c b/xen/arch/arm/vgic/vgic-mmio.c
index 2e939d5e39..c44d67082f 100644
--- a/xen/arch/arm/vgic/vgic-mmio.c
+++ b/xen/arch/arm/vgic/vgic-mmio.c
@@ -281,6 +281,109 @@ void vgic_mmio_write_cpending(struct vcpu *vcpu,
     }
 }
 
+/*
+ * The actual active bit for a virtual IRQ is held in the LR. Our shadow
+ * copy in struct vgic_irq is only synced when needed and may not be
+ * up-to-date all of the time.
+ * Returning the actual active state is quite costly (stopping all
+ * VCPUs processing any affected vIRQs), so we use a simple implementation
+ * to get the best possible answer.
+ */
+unsigned long vgic_mmio_read_active(struct vcpu *vcpu,
+                                    paddr_t addr, unsigned int len)
+{
+    uint32_t intid = VGIC_ADDR_TO_INTID(addr, 1);
+    uint32_t value = 0;
+    unsigned int i;
+
+    /* Loop over all IRQs affected by this read */
+    for ( i = 0; i < len * 8; i++ )
+    {
+        struct vgic_irq *irq = vgic_get_irq(vcpu->domain, vcpu, intid + i);
+
+        if ( irq->active )
+            value |= (1U << i);
+
+        vgic_put_irq(vcpu->domain, irq);
+    }
+
+    return value;
+}
+
+/*
+ * We don't actually support clearing the active state of an IRQ (yet).
+ * However there is a chance that most guests use this for initialization.
+ * We check whether this MMIO access would actually affect any active IRQ,
+ * and only print our warning in this case. So clearing already non-active
+ * IRQs would not be moaned about in the logs.
+ */
+void vgic_mmio_write_cactive(struct vcpu *vcpu,
+                             paddr_t addr, unsigned int len,
+                             unsigned long val)
+{
+    uint32_t intid = VGIC_ADDR_TO_INTID(addr, 1);
+    unsigned int i;
+    bool bail_out = false;
+
+    for_each_set_bit( i, &val, len * 8 )
+    {
+        struct vgic_irq *irq = vgic_get_irq(vcpu->domain, vcpu, intid + i);
+
+        /*
+         * If we know that the IRQ is active or we can't be sure about
+         * it (because it is currently in a CPU), log the not properly
+         * emulated MMIO access.
+         */
+        if ( irq->active || irq->vcpu )
+        {
+            gdprintk(XENLOG_ERR,
+                     "%pv: vGICD: IRQ%d: clearing active state not supported\n",
+                     vcpu, irq->intid);
+            bail_out = true;
+        }
+
+        vgic_put_irq(vcpu->domain, irq);
+        if ( bail_out )
+            return;
+    }
+}
+
+/*
+ * We don't actually support setting the active state of an IRQ (yet).
+ * We check whether this MMIO access would actually affect any non-active IRQ,
+ * and only print our warning in this case.
+ */
+void vgic_mmio_write_sactive(struct vcpu *vcpu,
+                             paddr_t addr, unsigned int len,
+                             unsigned long val)
+{
+    uint32_t intid = VGIC_ADDR_TO_INTID(addr, 1);
+    unsigned int i;
+    bool bail_out = false;
+
+    for_each_set_bit( i, &val, len * 8 )
+    {
+        struct vgic_irq *irq = vgic_get_irq(vcpu->domain, vcpu, intid + i);
+
+        /*
+         * If we know that the IRQ is not active or we can't be sure about
+         * it (because it is currently in a CPU), log the not properly
+         * emulated MMIO access.
+         */
+        if ( !irq->active || irq->vcpu )
+        {
+            gdprintk(XENLOG_ERR,
+                     "%pv: vGICD: IRQ%d: setting active state not supported\n",
+                     vcpu, irq->intid);
+            bail_out = true;
+        }
+
+        vgic_put_irq(vcpu->domain, irq);
+        if ( bail_out )
+            return;
+    }
+}
+
 static int match_region(const void *key, const void *elt)
 {
     const unsigned int offset = (unsigned long)key;
diff --git a/xen/arch/arm/vgic/vgic-mmio.h b/xen/arch/arm/vgic/vgic-mmio.h
index 4465f3b7e5..8604720628 100644
--- a/xen/arch/arm/vgic/vgic-mmio.h
+++ b/xen/arch/arm/vgic/vgic-mmio.h
@@ -118,6 +118,17 @@ void vgic_mmio_write_cpending(struct vcpu *vcpu,
                               paddr_t addr, unsigned int len,
                               unsigned long val);
 
+unsigned long vgic_mmio_read_active(struct vcpu *vcpu,
+                                    paddr_t addr, unsigned int len);
+
+void vgic_mmio_write_cactive(struct vcpu *vcpu,
+                             paddr_t addr, unsigned int len,
+                             unsigned long val);
+
+void vgic_mmio_write_sactive(struct vcpu *vcpu,
+                             paddr_t addr, unsigned int len,
+                             unsigned long val);
+
 unsigned int vgic_v2_init_dist_iodev(struct vgic_io_device *dev);
 
 #endif
-- 
2.14.1
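
To illustrate why the "only warn when it matters" check in the ICACTIVER
handler is worth having: a guest bringing up its GIC driver typically writes
all-ones to GICD_ICACTIVER for each 32-interrupt block, even though none of
those interrupts has ever been active. The stand-alone sketch below (plain C;
the irq_active[] array and write_cactive_model() helper are hypothetical
stand-ins, not the Xen code from the patch) models just that policy: walk the
set bits of the written value and complain only if the write would actually
have to change something.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NR_IRQS 64

/* Simplified stand-in for struct vgic_irq: only the shadow active flag. */
static bool irq_active[NR_IRQS];

/*
 * Model of the ICACTIVER policy: for every set bit in the guest's write,
 * warn (and bail out) only if the targeted IRQ is currently active, i.e.
 * only if ignoring the write would actually lose state.
 */
static void write_cactive_model(unsigned int first_intid, uint32_t val)
{
    for ( unsigned int i = 0; i < 32; i++ )
    {
        if ( !(val & (1U << i)) )
            continue;                 /* bit not set: nothing requested */

        if ( irq_active[first_intid + i] )
        {
            printf("IRQ%u: clearing active state not supported\n",
                   first_intid + i);
            return;                   /* one warning is enough, bail out */
        }
        /* IRQ already inactive: the write is a no-op, stay silent. */
    }
}

int main(void)
{
    /* Typical guest init: clear-all write over inactive IRQs => silent. */
    write_cactive_model(32, 0xffffffffU);

    /* With one IRQ genuinely active, the same write triggers the warning. */
    irq_active[35] = true;
    write_cactive_model(32, 0xffffffffU);

    return 0;
}

The sketch deliberately leaves out the second condition the real handlers
check: an IRQ whose irq->vcpu is non-NULL is also treated as "cannot be sure",
because its true active state lives in a list register and reading it back
would require stopping that VCPU, which is exactly the halt-the-guest work the
commit message says is not yet implemented.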