
RE: [Xen-devel] [RFC] Pass-through Interdomain Interrupts Sharing(HVM/Dom0)


  • To: "Tian, Kevin" <kevin.tian@xxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: "Guy Zana" <guy@xxxxxxxxxxxx>
  • Date: Fri, 10 Aug 2007 06:10:56 -0400
  • Cc: Alex Novik <alex@xxxxxxxxxxxx>
  • Delivery-date: Fri, 10 Aug 2007 03:20:42 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AcfarSF45lhECFFTQSiE3oBAG7IKmQASGmqAAA5CKCA=
  • Thread-topic: [Xen-devel] [RFC] Pass-through Interdomain Interrupts Sharing(HVM/Dom0)

Thanks, Kevin, for all of your comments; I agree with them all.
First, most of the work here was done by Alex Novik, not me :)

More comments below...

Thanks,
Guy.

> -----Original Message-----
> From: Tian, Kevin [mailto:kevin.tian@xxxxxxxxx] 
> Sent: Friday, August 10, 2007 5:59 AM
> To: Guy Zana; xen-devel@xxxxxxxxxxxxxxxxxxx
> Cc: Alex Novik
> Subject: RE: [Xen-devel] [RFC] Pass-through Interdomain 
> Interrupts Sharing(HVM/Dom0)
> 
> Hi, Guy,
>       Thanks for the very good description.
> 
>       Basically I think this should work, but with the following concerns:
> 
> - How to choose the timeout value?
>       A small timeout may result in more spurious injections and a 
> performance penalty, while a large timeout may not satisfy the 
> driver's expectations for a high-speed device.
> 

That's a good point. The spurious-vs-starving trade-off is exactly opposite 
between the HVM and dom0. For an HVM that holds a vline, a large timeout 
value will result in more spurious interrupts, since the line is held 
asserted for longer.

The timeout value could be adaptive: increased (made slower) anytime the 
timer fires and decides to do nothing, and decreased anytime it takes a 
decision. This may complicate things even further.
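
A minimal sketch of what such an adaptive policy could look like (plain C; 
the bounds, names, and pt_timeout_update() are all hypothetical, nothing 
like this exists in the proposal yet):

    #include <stdbool.h>
    #include <stdint.h>

    /* Assumed bounds on the pass-through timer period. */
    #define PT_TIMEOUT_MIN_NS   100000ULL    /* 100us */
    #define PT_TIMEOUT_MAX_NS 10000000ULL    /* 10ms  */

    static uint64_t pt_timeout_ns = 1000000ULL;  /* start at 1ms */

    /* Called after each timer expiry: back off when the timer fired and
     * decided to do nothing (to cut spurious injections in dom0), and
     * tighten when it actually took a decision (to keep latency low
     * for high-speed devices). */
    static void pt_timeout_update(bool took_action)
    {
        if ( took_action )
        {
            if ( pt_timeout_ns / 2 >= PT_TIMEOUT_MIN_NS )
                pt_timeout_ns /= 2;
        }
        else
        {
            if ( pt_timeout_ns * 2 <= PT_TIMEOUT_MAX_NS )
                pt_timeout_ns *= 2;
        }
    }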

Does the IOAPIC have a timeout value for firing an interrupt when the line 
is held asserted? Would using that value be feasible?
Freezing the timer is logically the same as masking the IOAPIC.

> - How to cope with the existing irq sharing mechanism for a PV 
> driver domain?
>       The existing approach between a PV driver domain and dom0 is 
> based on some trigger point, i.e. guest EOI: keep an insertion 
> count and track the guest's response. The timeout mechanism is 
> different, and I guess it is difficult for the two paths to share logic.
> 
>       How about a mixed sharing case, say among dom0 / PV 
> domain / HVM domain?

Sharing is problematic between multiple domains, at least when an HVM is 
involved. I guess it will be infrequent that you'll want to assign more than 
two devices sharing the same line to domains other than dom0; I look at the 
M devices left to dom0 more as a nuisance.

I didn't give a lot of thought to that, but you could probably allow PV 
domains in the proposed shared interdomain ISR chain: inject the interrupt 
into all of the PV domains and dom0 (simultaneously), OR their 
handling-status results, and take action based on that value. Sharing a 
line between two or more HVMs is much more difficult to solve.
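
Roughly, the ORing idea could look like this (a sketch only; struct 
shared_irq, inject_irq() and irq_was_handled() are all hypothetical, and 
in reality the handling results would come back asynchronously, e.g. at 
guest EOI, rather than by polling right after injection):

    #include <stdbool.h>

    #define MAX_SHARERS 8                   /* assumed per-line limit */

    struct domain;                          /* Xen's domain type */

    struct shared_irq {
        struct domain *doms[MAX_SHARERS];   /* dom0 + PV domains on line */
        unsigned int nr_doms;
    };

    void inject_irq(struct domain *d);        /* hypothetical */
    bool irq_was_handled(struct domain *d);   /* hypothetical */

    /* Inject into all sharers simultaneously, then OR the per-domain
     * handling results; any single "handled" means the line was
     * serviced and the timer/vline machinery can stand down. */
    static bool shared_irq_dispatch(struct shared_irq *sirq)
    {
        bool handled = false;
        unsigned int i;

        for ( i = 0; i < sirq->nr_doms; i++ )
            inject_irq(sirq->doms[i]);

        for ( i = 0; i < sirq->nr_doms; i++ )
            handled |= irq_was_handled(sirq->doms[i]);

        return handled;
    }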

> 
> - Interrupt delay within the HVM may be exaggerated under some 
> special conditions: if the HVM is not ready to handle the 
> injection at D.3 (e.g. blocked in I/O emulation), the later 
> D.4 will cancel the previous injection at the next timeout. Then only 
> at the next D.3 does the HVM get the re-injection again, and it may or 
> may not be delayed again depending on its status at that time.

I'm not sure I understood -

In a D.3 -> D.4 -> D.3 event cycle, the HVM's vline stays asserted. Dom0 
always gets a chance to check whether the interrupt is its own, but the 
vline stays asserted until dom0 has handled it or until the pline is 
deasserted.
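
Stated as code, the rule is something like this (a sketch; struct pt_irq 
and its fields are hypothetical names for the per-line state):

    #include <stdbool.h>

    struct pt_irq {
        bool pline_asserted;   /* physical line level           */
        bool vline_asserted;   /* virtual line shown to the HVM */
        bool dom0_handled;     /* dom0 claimed the interrupt    */
    };

    /* The vline only drops when dom0 has handled the interrupt or the
     * physical line itself is deasserted; otherwise it stays asserted
     * across D.3 -> D.4 -> D.3 cycles. */
    static void vline_update(struct pt_irq *p)
    {
        if ( !p->pline_asserted || p->dom0_handled )
            p->vline_asserted = false;
    }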

The HVM will be ready when it unmasks the IOAPIC's pin and its VCPU is 
executing.
It doesn't matter whether you choose to assert or deassert its vline. In 
the meantime the timer will fire, and that will eventually create spurious 
interrupts in dom0. But an assumption we took is that we can't avoid 
spurious interrupts, and we'd rather get them in dom0.

> 
>       Did you run some heavy workload and observe any complains?

We haven't implemented it yet :-)

Thanks for the great comments!

Guy.
