
Re: [Xen-devel] Interrupt to CPU routing in HVM domains - again



On Fri, Sep 05, 2008 at 01:11:41PM -0400, Steve Ofsthun wrote:

> >>#ifdef IRQ0_SPECIAL_ROUTING
> >>   /* Force round-robin to pick VCPU 0 */
> >>   if ( ((irq == hvm_isa_irq_to_gsi(0)) && pit_channel0_enabled()) ||
> >>        is_hvm_callback_irq(vioapic, irq) )
> >>       deliver_bitmask = (uint32_t)1;
> >>#endif
> >
> >Yes, please - Solaris 10 PV drivers are buggy in that they use the
> >current VCPU's vcpu_info. I just found this bug, and it's getting fixed,
> >but if this makes sense anyway, it'd be good.
> 
> I can submit a patch for this, but we feel this is something of a hack.  

Yep.
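
For anyone following along, the Solaris bug boils down to indexing
vcpu_info by whichever CPU happens to take the callback IRQ, rather than
by the VCPU the events are actually posted to.  Rough sketch of the
difference (field names are from the public shared_info layout;
handle_pending_events() is just a stand-in for the driver's real demux
loop):

/* Sketch only: with the single HVM callback IRQ, events are posted to
 * VCPU 0's vcpu_info, so that is the one to look at, no matter which
 * CPU ends up servicing the interrupt. */
void xen_callback_isr(struct shared_info *shared, unsigned int this_cpu)
{
    /* Buggy: only works if the IOAPIC routes the IRQ to VCPU 0. */
    /* struct vcpu_info *vi = &shared->vcpu_info[this_cpu]; */

    /* Correct: use the VCPU the event channels are delivered to. */
    struct vcpu_info *vi = &shared->vcpu_info[0];

    if (vi->evtchn_upcall_pending)
        handle_pending_events(shared, vi);
}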

> We'd like to provide a more general mechanism for allowing event channel 
> binding to "work" for HVM guests.  But to do this, we are trying to address 
> conflicting goals.  Either we honor the event channel binding by 
> circumventing the IOAPIC emulation, or we faithfully emulate the IOAPIC and 
> circumvent the event channel binding.

Well, this doesn't really make sense as it stands anyway: the IRQ binding has
little to do with where the evtchns are handled (I don't think there's any
requirement that the two happen on the same CPU).

> Our driver writers would like to see support for multiple callback IRQs.  
> Then particular event channel interrupts could be bound to particular IRQs. 
> This would allow PV device interrupts to be distributed intelligently.  It 
> would also allow net and block interrupts to be disentangled for Windows PV 
> drivers.

You could do a bunch of that just by distributing them from the single
callback IRQ. But I suppose it would be nice to move to a
one-IRQ-per-evtchn model. You'd have to keep the existing ABI of course,
so you'd need a feature flag or something.
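
To show what I mean by distributing from the single callback IRQ: the
handler just walks the usual two-level pending/mask bitmaps and hands
each port off to whatever handler the driver registered for it, and
those handlers can push their real work onto whichever CPU they like.
Sketch only - port_handlers[] is made up, and the bit helpers (xchg,
__ffs, clear_bit, BITS_PER_LONG) are the usual Linux-kernel ones:

/* Demultiplex the single callback IRQ into per-port handlers. */
static void demux_callback(struct shared_info *s)
{
    struct vcpu_info *vi = &s->vcpu_info[0];  /* HVM events land on VCPU 0 */
    unsigned long sel, pending;
    unsigned int word, bit, port;

    vi->evtchn_upcall_pending = 0;
    sel = xchg(&vi->evtchn_pending_sel, 0);

    while (sel) {
        word = __ffs(sel);
        sel &= ~(1UL << word);

        pending = s->evtchn_pending[word] & ~s->evtchn_mask[word];
        while (pending) {
            bit = __ffs(pending);
            pending &= ~(1UL << bit);
            port = word * BITS_PER_LONG + bit;

            clear_bit(bit, &s->evtchn_pending[word]);
            if (port_handlers[port])
                port_handlers[port](port);    /* driver-registered hook */
        }
    }
}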

> We deal pretty much exclusively with HVM guests; do SMP PV environments
> selectively bind device interrupts to different VCPUs?

For true PV you can bind evtchns at will.
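
i.e. you can just ask Xen to retarget a port, something like this (the op
and struct are EVTCHNOP_bind_vcpu / struct evtchn_bind_vcpu from the
public event_channel.h; the hypercall wrapper name below is the Linux
one):

#include <xen/interface/event_channel.h>

/* Rebind an already-bound event channel port to the given VCPU. */
static int bind_port_to_vcpu(evtchn_port_t port, unsigned int vcpu)
{
    struct evtchn_bind_vcpu bind = {
        .port = port,
        .vcpu = vcpu,
    };

    return HYPERVISOR_event_channel_op(EVTCHNOP_bind_vcpu, &bind);
}

IIRC the per-VCPU sources (IPIs, per-VCPU VIRQs) can't be moved, but
interdomain ports - which is what the device frontends use - can.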

regards
john

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

