Re: [PATCH v2 06/11] x86/hvm: allowing registering EOI callbacks for GSIs



On 30.09.2020 12:41, Roger Pau Monne wrote:
> --- a/xen/arch/x86/hvm/irq.c
> +++ b/xen/arch/x86/hvm/irq.c
> @@ -595,6 +595,66 @@ int hvm_local_events_need_delivery(struct vcpu *v)
>      return !hvm_interrupt_blocked(v, intack);
>  }
>  
> +int hvm_gsi_register_callback(struct domain *d, unsigned int gsi,
> +                              struct hvm_gsi_eoi_callback *cb)
> +{
> +    if ( gsi >= hvm_domain_irq(d)->nr_gsis )
> +    {
> +        ASSERT_UNREACHABLE();
> +        return -EINVAL;
> +    }
> +
> +    write_lock(&hvm_domain_irq(d)->gsi_callbacks_lock);
> +    list_add(&cb->list, &hvm_domain_irq(d)->gsi_callbacks[gsi]);
> +    write_unlock(&hvm_domain_irq(d)->gsi_callbacks_lock);
> +
> +    return 0;
> +}
> +
> +void hvm_gsi_unregister_callback(struct domain *d, unsigned int gsi,
> +                                 struct hvm_gsi_eoi_callback *cb)
> +{
> +    struct list_head *tmp;

This could be const if you used ...

> +    if ( gsi >= hvm_domain_irq(d)->nr_gsis )
> +    {
> +        ASSERT_UNREACHABLE();
> +        return;
> +    }
> +
> +    write_lock(&hvm_domain_irq(d)->gsi_callbacks_lock);
> +    list_for_each ( tmp, &hvm_domain_irq(d)->gsi_callbacks[gsi] )
> +        if ( tmp == &cb->list )
> +        {
> +            list_del(tmp);

... &cb->list here.
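
I.e. roughly (untested, merely to illustrate):

    const struct list_head *tmp;
    ...
    list_for_each ( tmp, &hvm_domain_irq(d)->gsi_callbacks[gsi] )
        if ( tmp == &cb->list )
        {
            /* cb isn't const, so deleting via &cb->list is fine. */
            list_del(&cb->list);
            break;
        }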

> +            break;
> +        }
> +    write_unlock(&hvm_domain_irq(d)->gsi_callbacks_lock);
> +}
> +
> +void hvm_gsi_execute_callbacks(unsigned int gsi, void *data)
> +{
> +    struct domain *currd = current->domain;
> +    struct hvm_gsi_eoi_callback *cb;
> +
> +    read_lock(&hvm_domain_irq(currd)->gsi_callbacks_lock);
> +    list_for_each_entry ( cb, &hvm_domain_irq(currd)->gsi_callbacks[gsi],
> +                          list )
> +        cb->callback(gsi, cb->data ?: data);

Are callback functions in principle permitted to unregister
themselves? If so, you'd need to use list_for_each_entry_safe()
here.
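
I.e. something along the lines of (untested, and only if such
self-unregistration is really meant to be permitted):

    struct hvm_gsi_eoi_callback *cb, *next;

    list_for_each_entry_safe ( cb, next,
                               &hvm_domain_irq(currd)->gsi_callbacks[gsi],
                               list )
        /* Iteration stays valid even if cb gets removed from the list. */
        cb->callback(gsi, cb->data ?: data);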

What's the idea of passing cb->data _or_ data?

Finally here and maybe in a few more places latch hvm_domain_irq()
into a local variable?
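
E.g. (just a sketch)

    struct hvm_irq *hvm_irq = hvm_domain_irq(currd);

    read_lock(&hvm_irq->gsi_callbacks_lock);
    list_for_each_entry ( cb, &hvm_irq->gsi_callbacks[gsi], list )
        cb->callback(gsi, cb->data ?: data);
    read_unlock(&hvm_irq->gsi_callbacks_lock);

avoiding the repeated hvm_domain_irq() evaluations.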

> +    read_unlock(&hvm_domain_irq(currd)->gsi_callbacks_lock);
> +}
> +
> +bool hvm_gsi_has_callbacks(struct domain *d, unsigned int gsi)

I think a function like this would want to have all const inputs,
and it looks to be possible thanks to hvm_domain_irq() yielding
a pointer.
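
I.e. presumably (the body below is merely my guess, mirroring the
registration logic earlier in the patch):

    bool hvm_gsi_has_callbacks(const struct domain *d, unsigned int gsi)
    {
        struct hvm_irq *hvm_irq = hvm_domain_irq(d);
        bool empty;

        read_lock(&hvm_irq->gsi_callbacks_lock);
        empty = list_empty(&hvm_irq->gsi_callbacks[gsi]);
        read_unlock(&hvm_irq->gsi_callbacks_lock);

        return !empty;
    }

Note how hvm_irq itself can remain non-const despite d being const.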

> --- a/xen/arch/x86/hvm/vioapic.c
> +++ b/xen/arch/x86/hvm/vioapic.c
> @@ -393,6 +393,7 @@ static void eoi_callback(unsigned int vector, void *data)
>          for ( pin = 0; pin < vioapic->nr_pins; pin++ )
>          {
>              union vioapic_redir_entry *ent = &vioapic->redirtbl[pin];
> +            unsigned int gsi = vioapic->base_gsi + pin;
>  
>              if ( ent->fields.vector != vector )
>                  continue;
> @@ -402,13 +403,17 @@ static void eoi_callback(unsigned int vector, void *data)
>              if ( is_iommu_enabled(d) )
>              {
>                  spin_unlock(&d->arch.hvm.irq_lock);
> -                hvm_dpci_eoi(vioapic->base_gsi + pin, ent);
> +                hvm_dpci_eoi(gsi, ent);
>                  spin_lock(&d->arch.hvm.irq_lock);
>              }
>  
> +            spin_unlock(&d->arch.hvm.irq_lock);
> +            hvm_gsi_execute_callbacks(gsi, ent);
> +            spin_lock(&d->arch.hvm.irq_lock);

Iirc on an earlier patch Paul has already expressed concern about such
transient unlocking. At the very least I'd expect the description to
say why this is safe. One particular question is to what extent what
ent points to can't change across this window, disconnecting the uses
of it in the 1st locked section from those in the 2nd one.
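
(If a snapshot is good enough for the callbacks, one option, just as a
thought, might be something like

    union vioapic_redir_entry entry = *ent; /* copy taken under the lock */

    spin_unlock(&d->arch.hvm.irq_lock);
    hvm_gsi_execute_callbacks(gsi, &entry);
    spin_lock(&d->arch.hvm.irq_lock);

but whether that's tolerable of course depends on what the callbacks do
with the entry.)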

> @@ -620,7 +628,7 @@ static int ioapic_load(struct domain *d, hvm_domain_context_t *h)
>           * Add a callback for each possible vector injected by a redirection
>           * entry.
>           */
> -        if ( vector < 16 || !ent->fields.remote_irr ||
> +        if ( vector < 16 ||
>               (delivery_mode != dest_LowestPrio && delivery_mode != dest_Fixed) )
>              continue;

I'm having trouble identifying what this gets replaced by.

Jan