
Re: [Xen-devel] [PATCH v6 09/11] vpci/msi: add MSI handlers



>>> On 19.09.17 at 17:29, <roger.pau@xxxxxxxxxx> wrote:
> Add handlers for the MSI control, address, data and mask fields in
> order to detect accesses to them and setup the interrupts as requested
> by the guest.
> 
> Note that the pending register is not trapped, and the guest can
> freely read/write to it.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> Reviewed-by: Paul Durrant <paul.durrant@xxxxxxxxxx>

I wonder how valid this can be with ...

> Changes since v5:
>  - Update to new lock usage.
>  - Change handlers to match the new type.
>  - s/msi_flags/msi_gflags/, remove the local variables and use the new
>    DOMCTL_VMSI_* defines.
>  - Change the MSI arch function to take a vpci_msi instead of a
>    vpci_arch_msi as parameter.
>  - Fix the calculation of the guest vector for MSI injection to take
>    into account the number of bits that can be modified.
>  - Use INVALID_PIRQ everywhere.
>  - Simplify exit path of vpci_msi_disable.
>  - Remove the conditional when setting address64 and masking fields.
>  - Add a process_pending_softirqs to the MSI dump loop.
>  - Place the prototypes for the MSI arch-specific functions in
>    xen/vpci.h.
>  - Add parentheses around the INVALID_PIRQ definition.

... this set of changes.

> +void vpci_msi_arch_mask(struct vpci_msi *msi, const struct pci_dev *pdev,
> +                        unsigned int entry, bool mask)
> +{
> +    const struct pirq *pinfo;
> +    struct irq_desc *desc;
> +    unsigned long flags;
> +    int irq;
> +
> +    ASSERT(msi->arch.pirq >= 0 && entry < msi->vectors);
> +    pinfo = pirq_info(pdev->domain, msi->arch.pirq + entry);
> +    if ( !pinfo )
> +        return;
> +
> +    irq = pinfo->arch.irq;
> +    if ( irq >= nr_irqs || irq < 0)

Style (missing blank before the closing parenthesis). However, ...

> +        return;
> +
> +    desc = irq_to_desc(irq);
> +    if ( !desc )
> +        return;
> +
> +    spin_lock_irqsave(&desc->lock, flags);

... didn't I comment on this already suggesting to use
domain_spin_lock_irq_desc() instead of open coding it?
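Something along these lines (a sketch only; the body past the lock isn't
quoted above, so the guest_mask_msi_irq() call is my assumption of what
follows):

void vpci_msi_arch_mask(struct vpci_msi *msi, const struct pci_dev *pdev,
                        unsigned int entry, bool mask)
{
    unsigned long flags;
    struct irq_desc *desc;

    ASSERT(msi->arch.pirq >= 0 && entry < msi->vectors);

    /* Looks up the pIRQ and returns its descriptor already locked,
     * replacing the open coded pirq_info() / irq_to_desc() /
     * spin_lock_irqsave() sequence. */
    desc = domain_spin_lock_irq_desc(pdev->domain, msi->arch.pirq + entry,
                                     &flags);
    if ( !desc )
        return;

    guest_mask_msi_irq(desc, mask);
    spin_unlock_irqrestore(&desc->lock, flags);
}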

> +static void vpci_msi_enable(const struct pci_dev *pdev, struct vpci_msi *msi,
> +                            unsigned int vectors)
> +{
> +    int ret;
> +
> +    ASSERT(!msi->enabled);
> +    ret = vpci_msi_arch_enable(msi, pdev, vectors);
> +    if ( ret )
> +        return;
> +
> +    /* Apply the mask bits. */
> +    if ( msi->masking )
> +    {
> +        unsigned int i;
> +        uint32_t mask = msi->mask;
> +
> +        for ( i = ffs(mask) - 1; mask && i < vectors; i = ffs(mask) - 1 )
> +        {
> +            vpci_msi_arch_mask(msi, pdev, i, true);
> +            __clear_bit(i, &mask);
> +        }
> +    }
> +
> +    __msi_set_enable(pdev->seg, pdev->bus, PCI_SLOT(pdev->devfn),
> +                     PCI_FUNC(pdev->devfn), msi->pos, 1);

This is very unlikely to be a function that arch-independent code is
permitted to call.
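If the call needs to stay, it would presumably want to go behind an arch
hook. A sketch (the hook's name and exact parameters are made up for
illustration, not a suggestion of the final interface):

/* x86 side: keeps __msi_set_enable() out of common code. */
void vpci_msi_arch_set_enable(const struct pci_dev *pdev, unsigned int pos,
                              bool enable)
{
    __msi_set_enable(pdev->seg, pdev->bus, PCI_SLOT(pdev->devfn),
                     PCI_FUNC(pdev->devfn), pos, enable);
}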

> +void vpci_dump_msi(void)
> +{
> +    struct domain *d;

const?

> +    for_each_domain ( d )

You need to rcu_read_lock(&domlist_read_lock) in order to validly use
this construct.
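I.e. the loop wants to be bracketed like this (sketch):

    rcu_read_lock(&domlist_read_lock);
    for_each_domain ( d )
    {
        ...
    }
    rcu_read_unlock(&domlist_read_lock);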

> +    {
> +        const struct pci_dev *pdev;
> +
> +        if ( !has_vpci(d) )
> +            continue;
> +
> +        printk("vPCI MSI information for d%d\n", d->domain_id);
> +
> +        list_for_each_entry ( pdev, &d->arch.pdev_list, domain_list )
> +        {
> +            uint8_t seg = pdev->seg, bus = pdev->bus;
> +            uint8_t slot = PCI_SLOT(pdev->devfn), func = PCI_FUNC(pdev->devfn);
> +            const struct vpci_msi *msi = pdev->vpci->msi;
> +
> +            if ( !spin_trylock(&pdev->vpci->lock) )
> +            {
> +                printk("Unable to get vPCI lock, skipping\n");
> +                continue;
> +            }
> +
> +            if ( msi )
> +            {
> +                printk("Device %04x:%02x:%02x.%u\n", seg, bus, slot, func);
> +
> +                printk("  Enabled: %u Supports masking: %u 64-bit addresses: 
> %u\n",
> +                       msi->enabled, msi->masking, msi->address64);

bool wants to be printed with %d, I think.

> +                printk("  Max vectors: %u enabled vectors: %u\n",
> +                       msi->max_vectors, msi->vectors);
> +
> +                vpci_msi_arch_print(msi);
> +
> +                if ( msi->masking )
> +                    printk("  mask=%08x\n", msi->mask);

Is this really worth a separate line? Also please don't separately print
->masking as its value will be known from the presence of the field here.

Overall please try to shorten messages such that they're still
meaningful but don't cause unnecessary load on the serial line or extra
wasted space in the ring buffer.
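E.g. a single line per device, along the lines of (format purely
illustrative):

    printk("%04x:%02x:%02x.%u MSI: en=%d 64=%d vec=%u/%u mask=%08x\n",
           seg, bus, slot, func, msi->enabled, msi->address64,
           msi->vectors, msi->max_vectors, msi->mask);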

> --- a/xen/include/xen/vpci.h
> +++ b/xen/include/xen/vpci.h
> @@ -72,6 +72,30 @@ struct vpci {
>          } bars[7]; /* At most 6 BARS + 1 expansion ROM BAR. */
>          /* FIXME: currently there's no support for SR-IOV. */
>      } header;
> +
> +    /* MSI data. */
> +    struct vpci_msi {
> +        /* Arch-specific data. */
> +        struct vpci_arch_msi arch;
> +        /* Address. */
> +        uint64_t address;
> +        /* Offset of the capability in the config space. */
> +        unsigned int pos;
> +        /* Maximum number of vectors supported by the device. */
> +        unsigned int max_vectors;
> +        /* Number of vectors configured. */
> +        unsigned int vectors;
> +        /* Mask bitfield. */
> +        uint32_t mask;
> +        /* Data. */
> +        uint16_t data;
> +        /* Enabled? */
> +        bool enabled;
> +        /* Supports per-vector masking? */
> +        bool masking;
> +        /* 64-bit address capable? */
> +        bool address64;
> +    } *msi;
>  #endif
>  };

As there may be quite a few instances of this structure, please strive to
keep its size down. Many of the fields above have a pretty limited valid
value range and hence would benefit from using more narrow types and/or
bitfields.
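For example (the widths below are assumptions derived from what the MSI
capability permits, i.e. at most 32 vectors and a config space offset below
256, not a worked out layout):

    struct vpci_msi {
        /* Arch-specific data. */
        struct vpci_arch_msi arch;
        /* Address and data fields. */
        uint64_t address;
        uint16_t data;
        /* Mask bitfield. */
        uint32_t mask;
        /* Offset of the capability in the config space. */
        uint8_t pos;
        /* Maximum number of vectors supported by the device. */
        uint8_t max_vectors;
        /* Number of vectors configured. */
        uint8_t vectors;
        bool enabled : 1;
        bool masking : 1;
        bool address64 : 1;
    } *msi;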

Jan
