
Re: [Xen-devel] [PATCH 0/5] Add MSI support to XEN



Oh yes, that is true. They then have special logic for detecting nested
delivery, and they mask/unmask only in that case. Fair enough, and similar to
what we should do in Xen.
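
For illustration, a minimal, self-contained sketch of such a
handle_edge_irq()-style flow; all names, stubs and the simulation in main()
are illustrative only, not the actual Linux or Xen code:

#include <stdio.h>

#define IRQ_INPROGRESS  0x1
#define IRQ_PENDING     0x2
#define IRQ_MASKED      0x4

struct msi_desc {
    unsigned int status;
};

/* Stubs standing in for the real operations. */
static void apic_eoi(void)          { printf("local APIC EOI\n"); }
static void msi_mask(void)          { printf("mask MSI (PCI config space write)\n"); }
static void msi_unmask(void)        { printf("unmask MSI (PCI config space write)\n"); }
static void deliver_to_guest(void)  { printf("inject interrupt into guest\n"); }

static void handle_edge_msi(struct msi_desc *desc)
{
    /* The real code holds the irq descriptor lock here and drops it
     * around the delivery call. */
    if (desc->status & IRQ_INPROGRESS) {
        /* Nested delivery: only now pay for the slow PCI mask access. */
        desc->status |= IRQ_PENDING | IRQ_MASKED;
        msi_mask();
        apic_eoi();
        return;
    }

    apic_eoi();                     /* common case: no PCI bus access at all */
    desc->status |= IRQ_INPROGRESS;

    do {
        if (desc->status & IRQ_MASKED) {
            msi_unmask();
            desc->status &= ~IRQ_MASKED;
        }
        desc->status &= ~IRQ_PENDING;
        deliver_to_guest();
    } while (desc->status & IRQ_PENDING);   /* replay any nested interrupt */

    desc->status &= ~IRQ_INPROGRESS;
}

int main(void)
{
    struct msi_desc nic = { 0 };

    handle_edge_msi(&nic);          /* first interrupt: EOI + deliver only */

    /* Simulate a second interrupt arriving while handling is in progress
     * (in real code this happens concurrently from the vector entry path). */
    nic.status |= IRQ_INPROGRESS;
    handle_edge_msi(&nic);          /* nested case: mask + mark pending */

    return 0;
}

The point being that the slow PCI config space access only happens when a
second interrupt arrives mid-handling; the common case costs just a local
APIC EOI.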

 -- Keir

On 28/3/08 12:15, "Espen Skoglund" <espen.skoglund@xxxxxxxxxxxxx> wrote:

> Just checked this.  Linux does the local APIC EOI on ->ack().
> 
> eSk
> 
> 
> [Keir Fraser]
>> I think Linux EOIs on ->end(), not on ->ack(), which is fine since
>> Linux doesn't defer or otherwise schedule ISR handlers.
> 
>>  -- Keir
> 
>> On 28/3/08 11:37, "Espen Skoglund" <espen.skoglund@xxxxxxxxxxxxx> wrote:
> 
>>> That is true.  I was quite puzzled by the requirement for the
>>> callback into Xen myself.  In standard Linux, MSI interrupts are
>>> treated as edge-triggered and are just acked in the local APIC upon
>>> delivery.
>>> 
>>> eSk
>>> 
>>> 
>>> 
>>> [Keir Fraser]
>>>> This requires the guest to call back into Xen to signal EOI (as we already
>>>> do for legacy level-triggered interrupts). We shouldn't really need to do
>>>> that for MSI, and it's rather more expensive than a couple of accesses over
>>>> the PCI bus!
>>> 
>>>> It's this callback into Xen, whose necessity we do not really understand,
>>>> that I'm railing against. Is there some fundamental aspect of MSI
>>>> we do not understand, or are we working around one brain-dead or buggy
>>>> device?
>>> 
>>>> -- Keir
>>> 
>>>> On 28/3/08 01:48, "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx> wrote:
>>> 
>>>>> We would not mask on every interrupt; instead, we mask only when a
>>>>> second interrupt arrives while the previous one is still pending. It
>>>>> should be something like handle_edge_irq() in upstream Linux.
>>>>> 
>>>>> -- Yunhong Jiang
>>>>> 
>>>>> Espen Skoglund <mailto:espen.skoglund@xxxxxxxxxxxxx> wrote:
>>>>>> Preventing interrupt storms by masking the interrupt in the MSI/MSI-X
>>>>>> capability structure or MSI-X table within the interrupt handler is
>>>>>> insane.  It requires accesses over the PCI/PCIe bus and is clearly
>>>>>> something you want to avoid on the fast path.
>>>>>> 
>>>>>> eSk
>>>>>> 
>>>>>> 
>>>>>> [Haitao Shan]
>>>>>>> There are not many changes compared with the original patches, but
>>>>>>> there are some issues on which we need your kind comments.
>>>>>> 
>>>>>>> 1> The ACK-NEW method is necessary to avoid IRQ storms, but it can
>>>>>>> cause a deadlock. During my tests I did observe a deadlock with the
>>>>>>> patches applied. When a NIC device is assigned to an HVM domain, the
>>>>>>> scenario is: Dom0 is waiting for an IDE interrupt (vector 0x21); the
>>>>>>> HVM domain is waiting for qemu's IDE emulation and is thus blocked;
>>>>>>> the NIC interrupt (MSI vector 0x31) is waiting for injection into the
>>>>>>> HVM domain, which is blocked; and the IDE interrupt is waiting for the
>>>>>>> NIC interrupt, since the NIC interrupt has higher priority but has not
>>>>>>> yet been ACKed by Xen. When the IDE and NIC interrupts are delivered
>>>>>>> to the same CPU and the guest OS is Vista, the problem is easy to
>>>>>>> observe.
>>>>>> 
>>>>>>> 2> Without ACK-NEW, some naughty NIC devices we have observed will
>>>>>>> cause IRQ storms; I think Yunhong can comment more on this. Basically,
>>>>>>> writing EOI without masking the MSI source brings on an IRQ storm.
>>>>>>> Although the reason is still under investigation, Xen should handle
>>>>>>> such bogus devices anyhow, right?
>>>>>> 
>>>>>>> 3> Using ACK-OLD and masking the MSI when writing EOI could be a
>>>>>>> solution. However, Xen does not own the PCI configuration space.
>>>>> 
>>> 
>>> 
>>> 
> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel