
RE: [PATCH] Re: [Xen-devel] Xen crash on dom0 shutdown



Keir Fraser wrote:
> On 24/9/08 09:59, "Jan Beulich" <jbeulich@xxxxxxxxxx> wrote:
>
>>> Well, this hypercall doesn't do pirq_guest_unbind() on IO-APIC-routed
>>> interrupts either, so I think the problem is wider ranging than just MSI
>>> interrupts. Consider unmap_irq() followed by pirq_guest_unbind() later.
>>> We'll BUG_ON(vector == 0) in the latter function. Looks a bit of a mess to
>>> me. I would say that if there are bindings remaining we should re-direct
>>> the event-channel to a 'no-op' pirq (e.g., -1) and indeed call
>>> pirq_guest_unbind() or similar.
>>
>> How about this one? It doesn't exactly follow the path you suggested,
>> i.e. doesn't mess with event channels, but rather marks the
>> irq<->vector mapping as invalid, allowing only a subsequent event
>> channel unbind (i.e. close) to recover from that state (which seems
>> better in terms of requiring proper discipline in the guest, as it
>> prevents re-using the irq or vector without properly cleaning up).
>
> Yeah, this is the kind of thing I had in mind. I will work on
> this a bit
> more (e.g., need to synchronise on d->evtchn_lock to avoid racing
> EVTCHNOP_bind_pirq; also I'm afraid about leaking MSI vectors on domain
> destruction). Thanks.
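
For reference, a conceptual sketch of the scheme discussed above. All the
names in it (the pirq_vector field, pirq_is_guest_bound(), free_irq_vector())
are hypothetical placeholders, not the actual patch; it only illustrates
"invalidate on unmap, recover only via event-channel close":

    #define VECTOR_INVALID  (-1)

    static int unmap_domain_pirq_sketch(struct domain *d, int pirq)
    {
        int vector = d->arch.pirq_vector[pirq];    /* hypothetical field */

        if ( vector <= 0 )
            return -EINVAL;

        if ( pirq_is_guest_bound(d, pirq) )        /* hypothetical helper */
        {
            /* Still bound to an event channel: do not free the vector.
             * Mark the mapping invalid so neither the irq nor the vector
             * can be re-used; only the later EVTCHNOP_close path
             * (pirq_guest_unbind) releases it. */
            d->arch.pirq_vector[pirq] = VECTOR_INVALID;
            return 0;
        }

        /* No bindings left: safe to free the vector right away. */
        free_irq_vector(vector);                   /* hypothetical helper */
        d->arch.pirq_vector[pirq] = 0;
        return 0;
    }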

Sorry, I just noticed that the vector is not managed at all :$
Currently assign_irq_vector() only checks IO_APIC_VECTOR(irq), while for the
AUTO_ASSIGN case there is no management at all.
I'm considering whether we can check irq_desc[vector]'s handler to tell
whether the vector has already been assigned.
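
A minimal sketch of that check, assuming the 3.x x86 layout where irq_desc[]
is indexed by vector and an unused descriptor still carries the default
no_irq_type handler (the helper itself is hypothetical, not existing code):

    static int vector_is_free(unsigned int vector)
    {
        /* Never handed out by assign_irq_vector(), or already freed. */
        return (vector < NR_VECTORS) &&
               (irq_desc[vector].handler == &no_irq_type);
    }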

I also noticed the following snippet in setupOneDevice() in
python/xen/xend/server/pciif.py; I suspect it has too much leading whitespace.
Also, maybe it would be better placed under pciback now.
           rc = xc.physdev_map_pirq(domid = fe_domid,
                                   index = dev.irq,
                                   pirq  = dev.irq)

>
> -- Keir

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

