
Re: [RFC PATCH 06/19] arm/gicv4-its: Add VLPI map/unmap operations


  • To: Mykyta Poturai <Mykyta_Poturai@xxxxxxxx>
  • From: Mykola Kvach <xakep.amatop@xxxxxxxxx>
  • Date: Wed, 11 Feb 2026 10:31:46 +0200
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
  • Delivery-date: Wed, 11 Feb 2026 08:32:04 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Mon, Feb 2, 2026 at 6:14 PM Mykyta Poturai <Mykyta_Poturai@xxxxxxxx> wrote:
>
> For VLPI to be injected into a guest, it needs to be mapped or moved to
> a corresponding VPE first. Add a struct to handle the info about the
> VLPI mapping and a flag indicating whether the IRQ is tied to a HW one.
>
> Implement mapping/unmapping of VLPIs to VPEs, also handle moving. Tie
> them to emulated MAPTI/MOVI/DISCARD commands.
>
> Add GIC_IRQ_GUEST_FORWARDED IRQ status flag to keep track of which LPIs
> are mapped to virtual ones.
>
> Signed-off-by: Mykyta Poturai <mykyta_poturai@xxxxxxxx>
> ---
>  xen/arch/arm/gic-v3-its.c             |  14 ++
>  xen/arch/arm/gic-v4-its.c             | 292 ++++++++++++++++++++++++++
>  xen/arch/arm/include/asm/gic_v3_its.h |  20 ++
>  xen/arch/arm/include/asm/gic_v4_its.h |  20 ++
>  xen/arch/arm/include/asm/vgic.h       |   5 +
>  xen/arch/arm/vgic-v3-its.c            |  42 +++-
>  6 files changed, 387 insertions(+), 6 deletions(-)
>  create mode 100644 xen/arch/arm/gic-v4-its.c
>
> diff --git a/xen/arch/arm/gic-v3-its.c b/xen/arch/arm/gic-v3-its.c
> index 25c07eb861..25889445f5 100644
> --- a/xen/arch/arm/gic-v3-its.c
> +++ b/xen/arch/arm/gic-v3-its.c
> @@ -315,6 +315,20 @@ int its_send_cmd_inv(struct host_its *its,
>      return its_send_command(its, cmd);
>  }
>
> +int its_send_cmd_discard(struct host_its *its, struct its_device *dev,
> +                         uint32_t eventid)
> +{
> +    uint64_t cmd[4];
> +    uint32_t deviceid = dev->host_devid;
> +
> +    cmd[0] = GITS_CMD_DISCARD | ((uint64_t)deviceid << 32);
> +    cmd[1] = (uint64_t)eventid;
> +    cmd[2] = 0x00;
> +    cmd[3] = 0x00;
> +
> +    return its_send_command(its, cmd);
> +}
> +
>  /* Set up the (1:1) collection mapping for the given host CPU. */
>  int gicv3_its_setup_collection(unsigned int cpu)
>  {
> diff --git a/xen/arch/arm/gic-v4-its.c b/xen/arch/arm/gic-v4-its.c
> new file mode 100644
> index 0000000000..9bbd0d96b7
> --- /dev/null
> +++ b/xen/arch/arm/gic-v4-its.c
> @@ -0,0 +1,292 @@
> +/*
> + * xen/arch/arm/gic-v4-its.c
> + *
> + * ARM Generic Interrupt Controller support v4 version
> + * based on xen/arch/arm/gic-v3-its.c and kernel GICv4 driver
> + *
> + * Copyright (C) 2023 - ARM Ltd
> + * Penny Zheng <penny.zheng@xxxxxxx>, ARM Ltd ported to Xen
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + */
> +
> +#include <xen/errno.h>
> +#include <xen/sched.h>
> +#include <xen/spinlock.h>
> +#include <asm/gic_v3_defs.h>
> +#include <asm/gic_v3_its.h>
> +#include <asm/gic_v4_its.h>
> +#include <asm/vgic.h>
> +
> +
> +static int its_send_cmd_vsync(struct host_its *its, uint16_t vpeid)
> +{
> +    uint64_t cmd[4];
> +
> +    cmd[0] = GITS_CMD_VSYNC;
> +    cmd[1] = (uint64_t)vpeid << 32;
> +    cmd[2] = 0x00;
> +    cmd[3] = 0x00;
> +
> +    return its_send_command(its, cmd);
> +}
> +
> +static int its_send_cmd_vmapti(struct host_its *its, struct its_device *dev,
> +                               uint32_t eventid)
> +{
> +    uint64_t cmd[4];
> +    uint32_t deviceid = dev->host_devid;
> +    struct its_vlpi_map *map = &dev->event_map.vlpi_maps[eventid];
> +    uint16_t vpeid = map->vm->vpes[map->vpe_idx]->vpe_id;
> +    uint32_t vintid = map->vintid;
> +    uint32_t db_pintid;
> +
> +    if ( map->db_enabled )
> +        db_pintid = map->vm->vpes[map->vpe_idx]->vpe_db_lpi;
> +    else
> +        db_pintid = INVALID_LPI;

If we want to disable the doorbell, the VMAPTI encoding should use
Dbell_pINTID = 1023 rather than INVALID_LPI.

Arm IHI 0069H.b, section 5.3.20 VMAPTI states:
- If Dbell_pINTID is 1023, then no physical interrupt is generated.
- It is an error if Dbell_pINTID is not a valid doorbell INTID, where
a valid INTID is either:
  * 1023, or
  * within the supported range for LPIs.

So using 1023 is the architected way to represent “no doorbell”.
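
Something along these lines would do (untested sketch; the constant name is
purely illustrative, a literal 1023 with a comment would be fine too):

    /* Architected "no doorbell" value for Dbell_pINTID (Arm IHI 0069). */
    #define ITS_DOORBELL_NONE    1023U

    ...
        if ( map->db_enabled )
            db_pintid = map->vm->vpes[map->vpe_idx]->vpe_db_lpi;
        else
            db_pintid = ITS_DOORBELL_NONE;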

> +
> +    cmd[0] = GITS_CMD_VMAPTI | ((uint64_t)deviceid << 32);
> +    cmd[1] = eventid | ((uint64_t)vpeid << 32);
> +    cmd[2] = vintid | ((uint64_t)db_pintid << 32);
> +    cmd[3] = 0x00;
> +
> +    return its_send_command(its, cmd);
> +}
> +
> +static bool pirq_is_forwarded_to_vcpu(struct pending_irq *pirq)
> +{
> +    ASSERT(pirq);
> +    return test_bit(GIC_IRQ_GUEST_FORWARDED, &pirq->status);
> +}
> +
> +bool event_is_forwarded_to_vcpu(struct its_device *dev, uint32_t eventid)
> +{
> +    struct pending_irq *pirq;
> +
> +    /* No vlpi maps at all ? */
> +    if ( !dev->event_map.vlpi_maps)
> +        return false;
> +
> +    pirq = dev->event_map.vlpi_maps[eventid].pirq;
> +    return pirq_is_forwarded_to_vcpu(pirq);
> +}
> +
> +static int its_send_cmd_vmovi(struct host_its *its, struct its_vlpi_map *map)
> +{
> +    uint64_t cmd[4];
> +    struct its_device *dev = map->dev;
> +    uint32_t eventid = map->eventid;
> +    uint32_t deviceid = dev->host_devid;
> +    uint16_t vpeid = map->vm->vpes[map->vpe_idx]->vpe_id;
> +    uint32_t db_pintid;
> +
> +    if ( map->db_enabled )
> +        db_pintid = map->vm->vpes[map->vpe_idx]->vpe_db_lpi;
> +    else
> +        db_pintid = INVALID_IRQ;

Same point as above: if the intention is “no doorbell”, the architected
encoding is Dbell_pINTID = 1023 (note this path also uses INVALID_IRQ, while
the VMAPTI path above used INVALID_LPI).

Per Arm IHI 0069H.b, section 5.3.21 VMOVI, an error is generated only when
D == 1 and Dbell_pINTID is not a valid doorbell INTID (valid values are
1023 or an INTID within the supported LPI range). When D == 0, Dbell_pINTID
is ignored.
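
And for VMOVI, since Dbell_pINTID is ignored when D == 0, the else branch
could simply be (sketch, reusing the illustrative constant suggested above):

        db_pintid = ITS_DOORBELL_NONE; /* ignored by the ITS when D == 0 */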

> +
> +    cmd[0] = GITS_CMD_VMOVI | ((uint64_t)deviceid << 32);
> +    cmd[1] = eventid | ((uint64_t)vpeid << 32);
> +    cmd[2] = (map->db_enabled ? 1UL : 0UL) | ((uint64_t)db_pintid << 32);
> +    cmd[3] = 0x00;
> +
> +    return its_send_command(its, cmd);
> +}
> +
> +static int gicv4_its_vlpi_map(struct its_vlpi_map *map)
> +{
> +    struct its_device *dev;
> +    struct host_its *its;
> +    uint32_t eventid;
> +    int ret;
> +
> +    if ( !map )
> +        return -EINVAL;
> +    dev = map->dev;
> +    its = map->dev->hw_its;
> +    eventid = map->eventid;
> +
> +    spin_lock(&dev->event_map.vlpi_lock);
> +
> +    if ( !dev->event_map.vm )
> +    {
> +        struct its_vlpi_map *maps;
> +
> +        maps = xzalloc_array(struct its_vlpi_map, dev->event_map.nr_lpis);
> +        if ( !maps )
> +        {
> +            ret = -ENOMEM;
> +            goto err;

nit: goto out; (nothing has been set up yet that needs unwinding)

> +        }
> +
> +        dev->event_map.vm = map->vm;
> +        dev->event_map.vlpi_maps = maps;
> +    }
> +    else if ( dev->event_map.vm != map->vm )
> +    {
> +        ret = -EINVAL;
> +        goto err;
> +    }
> +
> +    /* Get our private copy of the mapping information */
> +    dev->event_map.vlpi_maps[eventid] = *map;
> +
> +    if ( pirq_is_forwarded_to_vcpu(map->pirq) )
> +    {
> +        struct its_vlpi_map *old = &dev->event_map.vlpi_maps[eventid];
> +        uint32_t old_vpeid = old->vm->vpes[old->vpe_idx]->vpe_id;

Nit/bug?: old_vpeid is read after vlpi_maps[eventid] has been overwritten
with *map, so "old" already describes the new mapping and the VSYNC below is
sent to the new vPE rather than the one the interrupt is being moved away
from. Should old_vpeid be captured before the assignment?
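
E.g. (sketch only):

    /* Snapshot the old mapping before overwriting the cached copy below. */
    struct its_vlpi_map old = dev->event_map.vlpi_maps[eventid];

    /* Get our private copy of the mapping information */
    dev->event_map.vlpi_maps[eventid] = *map;

    if ( pirq_is_forwarded_to_vcpu(map->pirq) )
    {
        uint16_t old_vpeid = old.vm->vpes[old.vpe_idx]->vpe_id;

        /* VMOVI to the new vPE, then VSYNC the vPE we moved away from. */
        ...
    }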

> +
> +        /* Already mapped, move it around */
> +        ret = its_send_cmd_vmovi(dev->hw_its, map);
> +        if ( ret )
> +            goto err;
> +
> +        /*
> +         * ARM spec says that If, after using VMOVI to move an interrupt from
> +         * vPE A to vPE B, software moves the same interrupt again, a VSYNC
> +         * command must be issued to vPE A between the moves to ensure correct
> +         * behavior.
> +         * So each time we issue VMOVI, we VSYNC the old VPE for good measure.
> +         */
> +        ret = its_send_cmd_vsync(dev->hw_its, old_vpeid);
> +    }
> +    else
> +    {
> +        /* Drop the original physical mapping firstly */
> +        ret = its_send_cmd_discard(its, dev, eventid);
> +        if ( ret )
> +            goto err;
> +
> +        /* Then install the virtual one */
> +        ret = its_send_cmd_vmapti(its, dev, eventid);
> +        if ( ret )
> +            goto err;
> +
> +        /* Increment the number of VLPIs */
> +        dev->event_map.nr_vlpis++;
> +    }
> +
> +    goto out;
> +
> + err:
> +    xfree(dev->event_map.vlpi_maps);

1. Bug?: this unconditionally frees dev->event_map.vlpi_maps, but the array
is only newly allocated on the !dev->event_map.vm path. If the function is
entered with existing vm/maps, an error here frees live state and leads to a
use-after-free.
2. Prefer XFREE(dev->event_map.vlpi_maps); it frees and NULLs the pointer.
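
One way out (sketch): remember whether this call allocated the array and only
undo that much on error, e.g.

    bool new_maps = false;

    if ( !dev->event_map.vm )
    {
        /* ... xzalloc_array() as before ... */
        dev->event_map.vm = map->vm;
        dev->event_map.vlpi_maps = maps;
        new_maps = true;
    }
    ...
 err:
    if ( new_maps )
    {
        XFREE(dev->event_map.vlpi_maps);
        dev->event_map.vm = NULL;
    }
 out:
    spin_unlock(&dev->event_map.vlpi_lock);
    return ret;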

> + out:
> +    spin_unlock(&dev->event_map.vlpi_lock);
> +    return ret;
> +}

nit: add a blank line between the functions

> +int gicv4_its_vlpi_unmap(struct pending_irq *pirq)
> +{
> +    struct its_vlpi_map *map = pirq->vlpi_map;
> +    struct its_device *dev = map->dev;
> +    int ret;
> +    uint32_t host_lpi;
> +
> +    spin_lock(&dev->event_map.vlpi_lock);
> +
> +    if ( !dev->event_map.vm || !pirq_is_tied_to_hw(pirq) )
> +    {
> +        ret = -EINVAL;
> +        goto out;
> +    }
> +
> +    /* Drop the virtual mapping */
> +    ret = its_send_cmd_discard(dev->hw_its, dev, map->eventid);
> +    if ( ret )
> +        goto out;
> +
> +    /* Restore the physical one */
> +    clear_bit(GIC_IRQ_GUEST_FORWARDED, &pirq->status);
> +    host_lpi = dev->host_lpi_blocks[map->eventid / LPI_BLOCK] +
> +               (map->eventid % LPI_BLOCK);
> +    /* Map every host LPI to host CPU 0 */
> +    ret = its_send_cmd_mapti(dev->hw_its, dev->host_devid, map->eventid,
> +                             host_lpi, 0);
> +    if ( ret )
> +        goto out;
> +
> +    lpi_write_config(lpi_data.lpi_property, host_lpi, 0xff, LPI_PROP_ENABLED);

Are we intentionally resetting the host LPI priority to 0xff here (matching
the allocation path)? If so, a comment explaining that would be worthwhile.

> +
> +    ret = its_inv_lpi(dev->hw_its, dev, map->eventid, 0);
> +    if ( ret )
> +        goto out;
> +
> +    xfree(map);
> +    /*
> +     * Drop the refcount and make the device available again if
> +     * this was the last VLPI.
> +     */
> +    if ( !--dev->event_map.nr_vlpis )
> +    {
> +        dev->event_map.vm = NULL;
> +        xfree(dev->event_map.vlpi_maps);
> +    }
> +
> +out:
> +    spin_unlock(&dev->event_map.vlpi_lock);
> +    return ret;
> +}
> +
> +int gicv4_assign_guest_event(struct domain *d, paddr_t vdoorbell_address,
> +                             uint32_t vdevid, uint32_t eventid,
> +                             struct pending_irq *pirq)
> +
> +{
> +    int ret = ENODEV;
> +    struct its_vm *vm = d->arch.vgic.its_vm;
> +    struct its_vlpi_map *map;
> +    struct its_device *dev;
> +
> +    spin_lock(&d->arch.vgic.its_devices_lock);
> +    dev = get_its_device(d, vdoorbell_address, vdevid);
> +    if ( dev && eventid < dev->eventids )
> +    {
> +        /* Prepare the vlpi mapping info */
> +        map = xzalloc(struct its_vlpi_map);
> +        if ( !map )
> +            goto out;
> +        map->vm = vm;
> +        map->vintid = pirq->irq;
> +        map->db_enabled = true;
> +        map->vpe_idx = pirq->lpi_vcpu_id;
> +        map->properties = pirq->lpi_priority |
> +                          (test_bit(GIC_IRQ_GUEST_ENABLED, &pirq->status) ?
> +                          LPI_PROP_ENABLED : 0);
> +        map->pirq = pirq;
> +        map->dev = dev;
> +        map->eventid = eventid;
> +
> +        ret = gicv4_its_vlpi_map(map);
> +        if ( ret )
> +        {
> +            xfree(map);
> +            goto out;
> +        }
> +
> +        pirq->vlpi_map = map;
> +    }
> +
> + out:
> +    spin_unlock(&d->arch.vgic.its_devices_lock);
> +    return ret;
> +}
> +
> +int gicv4_its_vlpi_move(struct pending_irq *pirq, struct vcpu *vcpu)
> +{
> +    struct its_vlpi_map *map = pirq->vlpi_map;
> +    struct its_device *dev = map->dev;

map is dereferenced (map->dev) on the line above, before the !map check
below, so the NULL check comes too late.
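
I.e. (sketch):

    struct its_vlpi_map *map = pirq->vlpi_map;
    struct its_device *dev;

    if ( !map )
        return -EINVAL;

    dev = map->dev;
    if ( !dev->event_map.vm )
        return -EINVAL;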

> +
> +    if ( !dev->event_map.vm || !map )
> +        return -EINVAL;
> +
> +    map->vpe_idx = vcpu->vcpu_id;
> +    return gicv4_its_vlpi_map(map);
> +}
> diff --git a/xen/arch/arm/include/asm/gic_v3_its.h b/xen/arch/arm/include/asm/gic_v3_its.h
> index 9f0ea9ccb1..75c91c0426 100644
> --- a/xen/arch/arm/include/asm/gic_v3_its.h
> +++ b/xen/arch/arm/include/asm/gic_v3_its.h
> @@ -116,6 +116,9 @@
>  /* We allocate LPIs on the hosts in chunks of 32 to reduce handling overhead. */
>  #define LPI_BLOCK                       32U
>
> +#ifdef CONFIG_GICV4
> +#include <asm/gic_v4_its.h>
> +#endif
>  /*
>   * Describes a device which is using the ITS and is used by a guest.
>   * Since device IDs are per ITS (in contrast to vLPIs, which are per
> @@ -135,6 +138,9 @@ struct its_device {
>      uint32_t eventids;                  /* Number of event IDs (MSIs) */
>      uint32_t *host_lpi_blocks;          /* Which LPIs are used on the host */
>      struct pending_irq *pend_irqs;      /* One struct per event */
> +#ifdef CONFIG_GICV4
> +    struct event_vlpi_map event_map;
> +#endif
>  };
>
>  /* data structure for each hardware ITS */
> @@ -184,6 +190,8 @@ extern struct __lpi_data lpi_data;
>
>  extern struct list_head host_its_list;
>
> +int its_send_cmd_discard(struct host_its *its, struct its_device *dev,
> +                         uint32_t eventid);
>  int its_send_cmd_inv(struct host_its *its, uint32_t deviceid, uint32_t eventid);
>  int its_send_cmd_clear(struct host_its *its, uint32_t deviceid, uint32_t eventid);
>  int its_send_cmd_mapti(struct host_its *its, uint32_t deviceid,
> @@ -254,6 +262,18 @@ int its_send_command(struct host_its *hw_its, const void *its_cmd);
>
>  struct its_device *get_its_device(struct domain *d, paddr_t vdoorbell,
>                                    uint32_t vdevid);
> +/* GICv4 functions */
> +int gicv4_assign_guest_event(struct domain *d, paddr_t vdoorbell_address,
> +                             uint32_t vdevid, uint32_t eventid,
> +                             struct pending_irq *pirq);
> +int gicv4_its_vlpi_move(struct pending_irq *pirq, struct vcpu *vcpu);
> +#ifndef CONFIG_GICV4
> +#define event_is_forwarded_to_vcpu(dev, eventid) ((void)dev, (void)eventid, false)
> +#else
> +bool event_is_forwarded_to_vcpu(struct its_device *dev, uint32_t eventid);
> +void its_vpe_mask_db(struct its_vpe *vpe);
> +#endif
> +int gicv4_its_vlpi_unmap(struct pending_irq *pirq);
>
>  /* ITS quirks handling. */
>  uint64_t gicv3_its_get_cacheability(void);
> diff --git a/xen/arch/arm/include/asm/gic_v4_its.h b/xen/arch/arm/include/asm/gic_v4_its.h
> index f48eae60ad..722247ec60 100644
> --- a/xen/arch/arm/include/asm/gic_v4_its.h
> +++ b/xen/arch/arm/include/asm/gic_v4_its.h
> @@ -29,6 +29,26 @@
>  #define GITS_CMD_VINVALL                 0x2d
>  #define GITS_CMD_INVDB                   0x2e
>
> +/* Describes the mapping of a VLPI */
> +struct its_vlpi_map {
> +    struct its_vm       *vm;
> +    unsigned int        vpe_idx;    /* Index of the VPE */
> +    uint32_t            vintid;     /* Virtual LPI number */
> +    bool                db_enabled; /* Is the VPE doorbell to be generated? */
> +    uint8_t             properties;
> +    struct pending_irq  *pirq;
> +    struct its_device   *dev;
> +    uint32_t            eventid;
> +};
> +
> +struct event_vlpi_map {
> +    unsigned int            nr_lpis;
> +    spinlock_t              vlpi_lock;
> +    struct its_vm           *vm;
> +    struct its_vlpi_map     *vlpi_maps;
> +    unsigned int            nr_vlpis;
> +};
> +
>  #endif
>
>  /*
> diff --git a/xen/arch/arm/include/asm/vgic.h b/xen/arch/arm/include/asm/vgic.h
> index 77323b2584..360f8a968e 100644
> --- a/xen/arch/arm/include/asm/vgic.h
> +++ b/xen/arch/arm/include/asm/vgic.h
> @@ -70,6 +70,7 @@ struct pending_irq
>       * LPI with the same number in an LR must be from an older LPI, which
>       * has been unmapped before.
>       *
> +     * GIC_IRQ_GUEST_FORWARDED: the IRQ is forwarded to a VCPU(GICv4 only)
>       */
>  #define GIC_IRQ_GUEST_QUEUED   0
>  #define GIC_IRQ_GUEST_ACTIVE   1
> @@ -77,6 +78,7 @@ struct pending_irq
>  #define GIC_IRQ_GUEST_ENABLED  3
>  #define GIC_IRQ_GUEST_MIGRATING   4
>  #define GIC_IRQ_GUEST_PRISTINE_LPI  5
> +#define GIC_IRQ_GUEST_FORWARDED     6
>      unsigned long status;
>      struct irq_desc *desc; /* only set if the irq corresponds to a physical irq */
>      unsigned int irq;
> @@ -95,6 +97,9 @@ struct pending_irq
>       * vgic lock is not going to be enough. */
>      struct list_head lr_queue;
>      bool hw;                    /* Tied to HW IRQ */
> +#ifdef CONFIG_GICV4
> +    struct its_vlpi_map *vlpi_map;
> +#endif
>  };
>
>  #define NR_INTERRUPT_PER_RANK   32
> diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
> index 576e7fd4b0..94f7dd7d90 100644
> --- a/xen/arch/arm/vgic-v3-its.c
> +++ b/xen/arch/arm/vgic-v3-its.c
> @@ -589,6 +589,14 @@ static int its_discard_event(struct virt_its *its,
>      if ( vlpi == INVALID_LPI )
>          return -ENOENT;
>
> +    p = gicv3_its_get_event_pending_irq(its->d, its->doorbell_address,
> +                                        vdevid, vevid);
> +    if ( unlikely(!p) )
> +        return -EINVAL;
> +
> +    if ( pirq_is_tied_to_hw(p) )
> +        if ( gicv4_its_vlpi_unmap(p) )
> +            return -EINVAL;
>      /*
>       * TODO: This relies on the VCPU being correct in the ITS tables.
>       * This can be fixed by either using a per-IRQ lock or by using
> @@ -751,6 +759,27 @@ static int its_handle_mapti(struct virt_its *its, uint64_t *cmdptr)
>
>      vgic_init_pending_irq(pirq, intid, gic_is_gicv4());
>
> +    pirq->lpi_vcpu_id = vcpu->vcpu_id;

pirq->lpi_vcpu_id = vcpu->vcpu_id; is assigned twice (here and again below,
after the property-table read). Can one of the assignments be dropped? Note
that gicv4_assign_guest_event() below reads pirq->lpi_vcpu_id via
map->vpe_idx, so the earlier one does seem to be relied upon.

> +
> +    if ( pirq_is_tied_to_hw(pirq) )
> +        /*
> +         * If on GICv4, we could let the VLPI being directly injected
> +         * to the guest. To achieve that, the VLPI must be mapped using
> +         * the VMAPTI command.
> +         */
> +        if ( gicv4_assign_guest_event(its->d, its->doorbell_address, devid,
> +                                      eventid, pirq) )
> +            goto out_remove_mapping;

1. Looks like we should jump to "out_remove_host_entry" here, to roll back
the host mapping created by gicv3_assign_guest_event().
2. The later error paths are also missing a GICv4 rollback: once
gicv4_assign_guest_event() has succeeded, any subsequent failure needs a
gicv4_its_vlpi_unmap() before the host mapping is removed.
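
Roughly (sketch only; out_unmap_vlpi is a hypothetical new label, the rest
are the existing ones):

    if ( pirq_is_tied_to_hw(pirq) )
        if ( gicv4_assign_guest_event(its->d, its->doorbell_address, devid,
                                      eventid, pirq) )
            goto out_remove_host_entry;
    ...
    /* whatever currently jumps to out_remove_host_entry after this point */
    if ( ret )
        goto out_unmap_vlpi;
    ...
 out_unmap_vlpi:
    if ( pirq_is_tied_to_hw(pirq) )
        gicv4_its_vlpi_unmap(pirq);
    /* fall through */
 out_remove_host_entry:
    ...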


Best regards,
Mykola


> +
> +    if ( pirq_is_tied_to_hw(pirq) )
> +        set_bit(GIC_IRQ_GUEST_FORWARDED, &pirq->status);
> +    else
> +        /*
> +         * Mark this LPI as new, so any older (now unmapped) LPI in any LR
> +         * can be easily recognised as such.
> +         */
> +        set_bit(GIC_IRQ_GUEST_PRISTINE_LPI, &pirq->status);
> +
>      /*
>       * Now read the guest's property table to initialize our cached state.
>       * We don't need the VGIC VCPU lock here, because the pending_irq isn't
> @@ -761,12 +790,6 @@ static int its_handle_mapti(struct virt_its *its, uint64_t *cmdptr)
>          goto out_remove_host_entry;
>
>      pirq->lpi_vcpu_id = vcpu->vcpu_id;
> -    /*
> -     * Mark this LPI as new, so any older (now unmapped) LPI in any LR
> -     * can be easily recognised as such.
> -     */
> -    set_bit(GIC_IRQ_GUEST_PRISTINE_LPI, &pirq->status);
> -
>      /*
>       * Now insert the pending_irq into the domain's LPI tree, so that
>       * it becomes live.
> @@ -824,6 +847,13 @@ static int its_handle_movi(struct virt_its *its, uint64_t *cmdptr)
>      if ( unlikely(!p) )
>          goto out_unlock;
>
> +    if ( pirq_is_tied_to_hw(p) )
> +    {
> +        ret = gicv4_its_vlpi_move(p, nvcpu);
> +        if ( ret )
> +            goto out_unlock;
> +    }
> +
>      /*
>       * TODO: This relies on the VCPU being correct in the ITS tables.
>       * This can be fixed by either using a per-IRQ lock or by using
> --
> 2.51.2



 

