
Re: [Xen-devel] [RFC] Pass-through Interdomain Interrupts Sharing (HVM/Dom0)


  • To: Guy Zana <guy@xxxxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: Keir Fraser <keir@xxxxxxxxxxxxx>
  • Date: Fri, 10 Aug 2007 12:21:47 +0100
  • Cc: Alex Novik <alex@xxxxxxxxxxxx>
  • Delivery-date: Fri, 10 Aug 2007 04:22:25 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AcfarSF45lhECFFTQSiE3oBAG7IKmQAbyytWAAAcBIUABr4MgAACO0Od
  • Thread-topic: [Xen-devel] [RFC] Pass-through Interdomain Interrupts Sharing (HVM/Dom0)

On 10/8/07 11:22, "Guy Zana" <guy@xxxxxxxxxxxx> wrote:

>> My thought here is a simple priority list with move-to-back
>> of the frontmost domain when we deliver it the interrupt but
>> it does not deassert the line either within a reasonable time
>> or by the time it EOIs the interrupt. This is simple generic
>> logic needing no PV guest changes.
> 
> Even if the HVM handled the interrupt successfully, that doesn't mean the
> physical line will be deasserted (another device assigned to another domain
> may have asserted it while the HVM was processing the interrupt). You can't
> tell whether the HVM handled the interrupt successfully or not. How does
> this method overcome this?

It would cycle through the priority list, moving frontmost to back at each
stage, until the line is deasserted.
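
For illustration, a minimal C sketch of that move-to-back dispatch loop.
Every name in it (struct irq_sharers, inject_virq, wait_for_eoi_or_timeout,
line_asserted) is a hypothetical stand-in, not actual Xen internals:

#include <stdbool.h>
#include <stddef.h>

#define MAX_SHARERS 8

struct irq_sharers {
    int    domid[MAX_SHARERS];  /* domains sharing this line, in priority order */
    size_t count;
};

/* Assumed helpers: inject the virtual IRQ into a domain, block until it
 * EOIs (or a timeout expires), and sample the physical line state. */
extern void inject_virq(int domid, int irq);
extern void wait_for_eoi_or_timeout(int domid, int irq);
extern bool line_asserted(int irq);

/* Demote the frontmost sharer to the back of the priority list. */
static void move_front_to_back(struct irq_sharers *s)
{
    int front = s->domid[0];
    for (size_t i = 1; i < s->count; i++)
        s->domid[i - 1] = s->domid[i];
    s->domid[s->count - 1] = front;
}

/* Cycle through the sharers until the line deasserts: deliver to the
 * frontmost domain; if the line is still asserted once it EOIs (or the
 * timeout fires), move it to the back and try the next sharer. */
void dispatch_shared_irq(struct irq_sharers *s, int irq)
{
    while (line_asserted(irq) && s->count > 0) {
        inject_virq(s->domid[0], irq);
        wait_for_eoi_or_timeout(s->domid[0], irq);
        if (line_asserted(irq))
            move_front_to_back(s);
    }
}

Note that if no sharer ever deasserts the line, this loop keeps cycling; a
real implementation would presumably mask the line or bound the number of
passes rather than spin forever.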

> Btw, with the method we proposed you could add PV domains to the interdomain
> ISR chain, but it may not contain more than one HVM.

Well, that kind of sucks, doesn't it? And yet your method is significantly
more complicated than my approach, at least as described in your email.
Simpler and more general wins the day, unless your approach handles more
cases or has better performance?

 -- Keir

