Re: [Xen-devel] VT-d Posted-interrupt (PI) design for XEN
> -----Original Message-----
> From: Tim Deegan [mailto:tim@xxxxxxx]
> Sent: Friday, March 06, 2015 5:44 PM
> To: Wu, Feng
> Cc: Jan Beulich; Zhang, Yang Z; Tian, Kevin; xen-devel@xxxxxxxxxxxxx
> Subject: Re: [Xen-devel] VT-d Posted-interrupt (PI) design for XEN
>
> At 02:07 +0000 on 06 Mar (1425604054), Wu, Feng wrote:
> > > From: Tim Deegan [mailto:tim@xxxxxxx]
> > > But I don't understand why we would need a new global vector for
> > > RUNSTATE_blocked rather than suppressing the posted interrupts as
> > > you suggest for RUNSTATE_runnable. (Or conversely why not use the
> > > new global vector for RUNSTATE_runnable too?)
> >
> > If we suppress the posted-interrupts when vCPU is blocked, it cannot
> > be unblocked by the external interrupts, this is not correct.
>
> OK, I don't understand at all now. :) When the posted interrupt is
> suppressed, what happens to the interrupt?

When the posted interrupt is suppressed, the VT-d engine will not issue
notification events.

> If it's just dropped, then we can't use that for _any_ cases.

We can suppress the posted interrupt while the vCPU is waiting in the
runqueue (i.e. in RUNSTATE_runnable); there is no need to send a
notification event in that state, because when an interrupt arrives its
information is not _dropped_: it is stored in the PIR, which is synced
to the vIRR before VM entry.

> If it goes through the old path,
> via the vlapic, that should be enough to wake any HLT'ed vcpu.  It
> sounds like it might be a little slower, but not necessarily once
> you've had to add a new list of potentially-HLT'd-and-wakeable vcpus,
> especially with many idle vcpus.

When posted interrupts are in use, how can the interrupt go through the
old path?

Thanks,
Feng

> Tim.
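The suppression mechanism described above maps onto the SN (Suppress
Notification) bit of the VT-d posted-interrupt descriptor. Below is a
minimal C sketch of that idea; the struct layout and bit positions
follow the VT-d spec, but the helper names are illustrative
assumptions, not Xen's actual definitions.

    #include <stdint.h>

    /* Bit positions in the descriptor's control word. */
    #define POSTED_INTR_ON 0   /* Outstanding Notification */
    #define POSTED_INTR_SN 1   /* Suppress Notification */

    /*
     * 64-byte posted-interrupt descriptor: a 256-bit Posted Interrupt
     * Request (PIR) bitmap (one bit per vector, set by the IOMMU) plus
     * a control word holding ON, SN and the notification vector and
     * destination.
     */
    struct pi_desc {
        uint32_t pir[8];   /* pending vectors recorded by hardware */
        uint64_t control;  /* bit 0 = ON, bit 1 = SN, plus NV/NDST */
        uint32_t rsvd[6];
    } __attribute__((aligned(64)));

    /*
     * vCPU enters RUNSTATE_runnable (waiting in the runqueue):
     * suppress notification events.  Hardware keeps setting PIR bits,
     * so nothing is dropped.
     */
    static void pi_set_sn(struct pi_desc *pd)
    {
        __atomic_fetch_or(&pd->control, 1ull << POSTED_INTR_SN,
                          __ATOMIC_SEQ_CST);
    }

    /*
     * vCPU is scheduled to run again: clear SN.  Any interrupts that
     * arrived in the meantime are delivered when the PIR is synced
     * into the vIRR before VM entry.
     */
    static void pi_clear_sn(struct pi_desc *pd)
    {
        __atomic_fetch_and(&pd->control, ~(1ull << POSTED_INTR_SN),
                           __ATOMIC_SEQ_CST);
    }

This sketch also shows why the blocked case needs different handling:
a HLT'ed vCPU is not in any runqueue, so with SN set nothing would ever
wake it. That is the motivation in this thread for a dedicated global
notification vector for RUNSTATE_blocked instead of suppression.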