
Re: [Xen-devel] [PATCH 5/5] x86/vioapic: bind interrupts to PVH Dom0



>>> On 27.03.17 at 12:44, <roger.pau@xxxxxxxxxx> wrote:
> --- a/xen/arch/x86/hvm/vioapic.c
> +++ b/xen/arch/x86/hvm/vioapic.c
> @@ -199,6 +199,34 @@ static void vioapic_write_redirent(
>          unmasked = unmasked && !ent.fields.mask;
>      }
>  
> +    if ( is_hardware_domain(d) && unmasked )
> +    {
> +        xen_domctl_bind_pt_irq_t pt_irq_bind = {
> +            .irq_type = PT_IRQ_TYPE_GSI,
> +            .machine_irq = gsi,
> +            .u.gsi.gsi = gsi,
> +            .hvm_domid = DOMID_SELF,
> +        };
> +        int ret, pirq = gsi;
> +
> +        /* Interrupt has been unmasked, bind it now. */
> +        ret = mp_register_gsi(gsi, ent.fields.trig_mode, ent.fields.polarity);
> +        if ( ret && ret != -EEXIST )
> +        {
> +            gdprintk(XENLOG_WARNING,
> +                     "%s: error registering GSI %u: %d\n", __func__, gsi, ret);
> +        }
> +        if ( !ret )
> +        {
> +            ret = physdev_map_pirq(DOMID_SELF, MAP_PIRQ_TYPE_GSI, &pirq, &pirq,
> +                                   NULL);

With this call you basically admit that PVH can't really do
without physdev ops; you merely hide them behind IO-APIC RTE
writes. Along the lines of my comment on the previous patch, I
wonder though whether you really need to use this function, i.e.
whether you couldn't instead get away with little more than the
call to map_domain_pirq() which that function does.
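
For illustration only, a rough and untested sketch of what I
have in mind; the locking mirrors what physdev_map_pirq() does
for the GSI case, and it assumes pirq == gsi can be relied upon
for the hardware domain:

    pcidevs_lock();
    spin_lock(&d->event_lock);
    /* Dom0 GSIs are identity mapped to pirqs, so no allocation needed. */
    ret = map_domain_pirq(d, pirq, gsi, MAP_PIRQ_TYPE_GSI, NULL);
    spin_unlock(&d->event_lock);
    pcidevs_unlock();

Whether pcidevs_lock() is really needed for the GSI-only case
would want double checking.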

> +            BUG_ON(ret);

You absolutely don't want to bring down the entire system if a
failure occurs here or ...

> +            ret = pt_irq_create_bind(d, &pt_irq_bind);
> +            BUG_ON(ret);

... here. Probably the best you can do besides issuing a log
message is to mask the RTE.
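
I.e. perhaps something along these lines (just a sketch; how the
masked state then needs to be reflected in the rest of
vioapic_write_redirent() would still want checking, and the same
treatment would of course apply to the mapping failure above):

    ret = pt_irq_create_bind(d, &pt_irq_bind);
    if ( ret )
    {
        gdprintk(XENLOG_WARNING,
                 "%s: unable to bind GSI %u: %d\n", __func__, gsi, ret);
        /* Mask the RTE again rather than bringing down the host. */
        ent.fields.mask = 1;
        unmasked = 0;
    }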

Jan

