
Re: [Xen-devel] [PATCH v2 1/3] x86/physdev: factor out the code to allocate and map a pirq



>>> On 19.04.17 at 17:11, <roger.pau@xxxxxxxxxx> wrote:
> +int allocate_and_map_gsi_pirq(struct domain *d, int *index, int *pirq_p)
> +{
> +    int irq, pirq, ret;
> +
> +    if ( *index < 0 || *index >= nr_irqs_gsi )
> +    {
> +        dprintk(XENLOG_G_ERR, "dom%d: map invalid irq %d\n", d->domain_id,
> +                *index);
> +        return -EINVAL;
> +    }
> +
> +    irq = domain_pirq_to_irq(current->domain, *index);
> +    if ( irq <= 0 )
> +    {
> +        if ( is_hardware_domain(current->domain) )
> +            irq = *index;
> +        else {

Please adjust coding style issues like the brace placement here.
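
For illustration (with the else body elided, just as in the quote), the
expected placement would be along the lines of:

    if ( is_hardware_domain(current->domain) )
        irq = *index;
    else
    {
        ...
    }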

> +    pcidevs_lock();
> +    /* Verify or get pirq. */
> +    spin_lock(&d->event_lock);
> +    pirq = domain_irq_to_pirq(d, irq);
> +
> +    if ( *pirq_p < 0 )
> +    {
> +        if ( pirq )
> +        {
> +            dprintk(XENLOG_G_ERR, "dom%d: %d:%d already mapped to %d\n",
> +                    d->domain_id, *index, *pirq_p, pirq);
> +            if ( pirq < 0 )
> +            {
> +                ret = -EBUSY;
> +                goto done;
> +            }
> +        }
> +        else
> +        {
> +            pirq = get_free_pirq(d, MAP_PIRQ_TYPE_GSI);
> +            if ( pirq < 0 )
> +            {
> +                dprintk(XENLOG_G_ERR, "dom%d: no free pirq\n", d->domain_id);
> +                ret = pirq;
> +                goto done;
> +            }
> +        }
> +    }
> +    else
> +    {
> +        if ( pirq && pirq != *pirq_p )
> +        {
> +            dprintk(XENLOG_G_ERR, "dom%d: pirq %d conflicts with irq %d\n",
> +                    d->domain_id, *index, *pirq_p);
> +            ret = -EEXIST;
> +            goto done;
> +        }
> +        else
> +            pirq = *pirq_p;
> +    }
> +
> +    ret = map_domain_pirq(d, pirq, irq, MAP_PIRQ_TYPE_GSI, NULL);
> +    if ( ret == 0 )
> +        *pirq_p = pirq;
> +
> + done:
> +    spin_unlock(&d->event_lock);
> +    pcidevs_unlock();

All of the code above is repeated in allocate_and_map_msi_pirq(),
merely with the multi-MSI additions. That is too much code duplication for
my taste. Additionally, with the code split like this, it is questionable
what acquiring the PCI devices lock is good for here (I would think it is
needed at most in the MSI case).
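
To make the suggestion concrete, here is a purely illustrative sketch (the
name and exact parameters are invented, and the multi-MSI-specific pirq
allocation is omitted) of the shared lookup/mapping logic living in one
place, with the PCI devices lock taken only when an MSI descriptor is
involved:

    /*
     * Illustrative sketch only: common helper for the GSI and MSI paths.
     * The multi-MSI allocation special case (get_free_pirqs() etc.) would
     * also need to live here or be handled by the MSI-specific caller.
     */
    static int allocate_and_map_pirq(struct domain *d, int index, int *pirq_p,
                                     int irq, int type, struct msi_info *msi)
    {
        int pirq, ret;

        /* The PCI devices lock should only be needed on the MSI path. */
        if ( msi )
            pcidevs_lock();
        spin_lock(&d->event_lock);

        pirq = domain_irq_to_pirq(d, irq);
        if ( *pirq_p < 0 )
        {
            if ( pirq )
            {
                dprintk(XENLOG_G_ERR, "dom%d: %d:%d already mapped to %d\n",
                        d->domain_id, index, *pirq_p, pirq);
                if ( pirq < 0 )
                {
                    ret = -EBUSY;
                    goto done;
                }
            }
            else
            {
                pirq = get_free_pirq(d, type);
                if ( pirq < 0 )
                {
                    dprintk(XENLOG_G_ERR, "dom%d: no free pirq\n",
                            d->domain_id);
                    ret = pirq;
                    goto done;
                }
            }
        }
        else if ( pirq && pirq != *pirq_p )
        {
            dprintk(XENLOG_G_ERR, "dom%d: pirq %d conflicts with irq %d\n",
                    d->domain_id, index, *pirq_p);
            ret = -EEXIST;
            goto done;
        }
        else
            pirq = *pirq_p;

        ret = map_domain_pirq(d, pirq, irq, type, msi);
        if ( ret == 0 )
            *pirq_p = pirq;

     done:
        spin_unlock(&d->event_lock);
        if ( msi )
            pcidevs_unlock();
        return ret;
    }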

> +int allocate_and_map_msi_pirq(struct domain *d, int *index, int *pirq_p,
> +                              struct msi_info *msi)
> +{
> +    int irq, pirq, ret, type;
> +
> +    irq = *index;
> +    if ( irq == -1 || msi->entry_nr > 1 )
> +        irq = create_irq(NUMA_NO_NODE);

This doesn't look to be an exact equivalent of the original code: Even
with MAP_PIRQ_TYPE_MULTI_MSI, entry_nr may be 1, and the original
code calls create_irq() in that case too. If this is an intended change,
the rationale should be provided in the commit message. But as you
don't alter map_domain_pirq(), I doubt this is correct.
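
To spell out the difference: as I read the original switch fall-through in
physdev_map_pirq(), the decision was effectively (sketch, with "type" being
the value passed in the hypercall)

    /* Original behaviour: MULTI_MSI always allocates a fresh IRQ. */
    if ( type == MAP_PIRQ_TYPE_MULTI_MSI || irq == -1 )
        irq = create_irq(NUMA_NO_NODE);

i.e. independent of entry_nr for the multi-MSI type.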

> +    if ( irq < nr_irqs_gsi || irq >= nr_irqs )
> +    {
> +        dprintk(XENLOG_G_ERR, "dom%d: can't create irq for msi!\n",
> +                d->domain_id);
> +        ret = -EINVAL;
> +        goto free_irq;
> +    }
> +
> +    msi->irq = irq;
> +    type = (msi->entry_nr > 1 && !msi->table_base) ? MAP_PIRQ_TYPE_MULTI_MSI
> +                                                   : MAP_PIRQ_TYPE_MSI;

Same here - you're not necessarily reconstructing the type passed
into the hypercall.
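
The simplest way to avoid both issues may be to not derive the type at all,
but to have the caller hand it down, e.g. (prototype change merely a
suggestion):

    int allocate_and_map_msi_pirq(struct domain *d, int *index, int *pirq_p,
                                  int type, struct msi_info *msi);

with the original map type passed through unchanged from physdev_map_pirq().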

Jan

