Re: [Xen-devel] [PATCH v2] x86/apicv: fix RTC periodic timer and apicv issue
> From: Xuquan (Quan Xu) [mailto:xuquan8@xxxxxxxxxx]
> Sent: Wednesday, October 26, 2016 4:39 PM
>
> On October 26, 2016 1:20 PM, Tian, Kevin wrote:
> >> From: Xuquan (Quan Xu) [mailto:xuquan8@xxxxxxxxxx]
> >> Sent: Tuesday, October 25, 2016 4:36 PM
> >>
> >> On October 24, 2016 3:02 PM, Tian, Kevin wrote:
> >> >> From: Xuquan (Quan Xu) [mailto:xuquan8@xxxxxxxxxx]
> >> >> Sent: Monday, October 17, 2016 5:28 PM
> >> >>
> >> >> >> Back to the main open issue from before the holiday - multiple
> >> >> >> EOIs may come in and clear irq_issued before the guest actually
> >> >> >> handles the vpt injection (possible if the vpt vector is shared
> >> >> >> with other sources). I don't see a good solution for that open
> >> >> >> issue... :/
> >> >> >>
> >> >> >> We've discussed various options, which all fail in one place or
> >> >> >> another - they either miss an injection or incur undesired
> >> >> >> injections. Possibly we should consider another direction - fall
> >> >> >> back to the non-apicv path when we see the vpt vector pending
> >> >> >> but it is not the highest one.
> >> >> >>
> >> >> >> Original condition to enter virtual interrupt delivery:
> >> >> >>     else if ( cpu_has_vmx_virtual_intr_delivery &&
> >> >> >>               intack.source != hvm_intsrc_pic &&
> >> >> >>               intack.source != hvm_intsrc_vector )
> >> >> >>
> >> >> >> New condition:
> >> >> >>     else if ( cpu_has_vmx_virtual_intr_delivery &&
> >> >> >>               intack.source != hvm_intsrc_pic &&
> >> >> >>               intack.source != hvm_intsrc_vector &&
> >> >> >>               (pt_vector == -1 || intack.vector == pt_vector) )
> >> >> >>
> >> >> >> Thoughts?
> >> >> >>
> >> >> > Kevin,
> >> >> > When I try to fix it as you suggest, I cannot boot the guest, with
> >> >> > the below message (from xl dmesg):
> >> >>
> >> >> With Kevin's patch, the hypervisor always calls 'vmx_inject_extint()
> >> >> -> __vmx_inject_exception()' to inject the interrupt, then
> >> >> VM-enters in a loop.. the interrupt (PT, IPI, or others) can't be
> >> >> delivered to the guest..
> >> >>
> >> >> Also, so far we suppress the MSR-based APIC suggestion when APIC-V
> >> >> is available, per
> >> >> http://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=7f2e992b824ec62a2818e64390ac2ccfbd74e6b7
> >> >> so I think we can't fall back to non-apicv dynamically here..
> >> >>
> >> >
> >> > What about setting the EOI exit bitmap for intack.vector when it is
> >> > higher than the pending pt_vector? This way we can guarantee there
> >> > is always a chance to post pt_vector once pt_vector becomes the
> >> > highest one...
> >> >
> >> > (Of course you then need to make the later pt_intr_post conditional,
> >> > done only when intack.vector == pt_vector.)
> >>
> >> Kevin, thanks for your positive reply. I have returned that server
> >> (Intel(R) Xeon(R) CPU E5-2620 v3), so I can't verify it right away.
> >>
> >> By my understanding, "Virtual-Interrupt Delivery" and "EOI
> >> Virtualization" are independent of each other. Even if we set the EOI
> >> exit bitmap here to cause an EOI-induced VM exit, we still can't
> >> guarantee the PT interrupt is delivered to the guest at the
> >> EOI-induced VM exit of the highest vector..
>
> > Can you elaborate why you think it doesn't work? I didn't get your
> > point here. The idea is that, when the above situation occurs -
> > multiple pending vectors but pt_vector is not the highest - we set
> > the EOI exit bitmap for the highest vector.
>
> I understood your suggestion.
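For concreteness, a rough sketch of how that suggestion could look in
Xen's vmx_intr_assist() (xen/arch/x86/hvm/vmx/intr.c). This is an
illustration of the idea under discussion, not the final patch: it
assumes pt_update_irq() is changed to return the pending vpt vector
(-1 if none; the vanilla version at this point returned void), and it
reuses the existing vmx_set_eoi_exit_bitmap() and pt_intr_post()
helpers:

    /* Sketch only - simplified from vmx_intr_assist(); error handling,
     * the RVI update and EOI-exit-bitmap syncing are elided. */
    static void vmx_intr_assist_sketch(struct vcpu *v)
    {
        /* Assumed change: pt_update_irq() returns the pending vpt vector. */
        int pt_vector = pt_update_irq(v);
        struct hvm_intack intack = hvm_vcpu_has_pending_irq(v);

        if ( cpu_has_vmx_virtual_intr_delivery &&
             intack.source != hvm_intsrc_pic &&
             intack.source != hvm_intsrc_vector )
        {
            /*
             * intack.vector is the highest-priority pending vector.  If a
             * vpt interrupt is also pending, request an EOI-induced VM exit
             * on intack.vector, so Xen regains control when the guest
             * finishes handling it and gets a chance to post pt_vector once
             * it becomes the highest pending vector.
             */
            if ( pt_vector != -1 )
                vmx_set_eoi_exit_bitmap(v, intack.vector);

            intack = hvm_vcpu_ack_pending_irq(v, intack);

            /*
             * Account the vpt injection (decrement pending_intr_nr) only
             * when the vpt vector itself is being injected - the
             * "conditional pt_intr_post" mentioned above.
             */
            if ( intack.source == hvm_intsrc_lapic &&
                 intack.vector == pt_vector )
                pt_intr_post(v, intack);

            /* ... write intack.vector into RVI (GUEST_INTR_STATUS) and
             * flush any changed EOI-exit-bitmap words to the VMCS ... */
        }
    }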
> > Then once the guest EOIs the highest vector, a VM-exit happens, and
> > then, if pt_vector happens to be the next highest vector, you have a
> > chance to pt_intr_post before resuming the guest.
>
> The gap may be the count (pending_intr_nr) of pending periodic timer
> interrupts..
>
> The highest vector is consumed as below:
>
> a). vIRR to RVI (software, vmx_intr_assist())
> b). RVI to SVI etc., then deliver the interrupt with Vector through the
>     IDT (hardware, Virtual-Interrupt Delivery)
> c). EOI (hardware, EOI virtualization).. if we set the EOI exit bitmap
>     for the highest vector, THEN an EOI-induced VM exit occurs..
>
> If this were serial execution for each pending interrupt, your
> suggestion would work.
>
> But at step b), after delivering the highest vector, __hardware__ may
> continue to move the vpt vector (if it is now the highest) into RVI and
> deliver it to the guest, as in "Virtual-Interrupt Delivery" below,
> without decreasing the count (pending_intr_nr) of pending periodic
> timer interrupts..
>
> ((Since Xen doesn't itself move the periodic timer interrupt bit from
> vIRR into the guest interrupt status (RVI) in this case, Xen is not
> aware of it, does not decrease the count (pending_intr_nr) of pending
> periodic timer interrupts, and will then deliver a periodic timer
> interrupt again.))
>
> ..
> Virtual-Interrupt Delivery:
>     Vector ← RVI;
>     VISR[Vector] ← 1;
>     SVI ← Vector;
>     VPPR ← Vector & F0H;
>     VIRR[Vector] ← 0;
>     IF any bits set in VIRR
>         THEN RVI ← highest index of bit set in VIRR
>              (___if vpt is the highest index___)

Updating RVI doesn't mean delivering the virtual interrupt. There is no
difference from how software updates that field before resuming the
guest.

>         ELSE RVI ← 0;
>     FI;
>     deliver interrupt with Vector through IDT;
>     cease recognition of any pending virtual interrupt;

Please note that recognition is ceased here. If you check SDM 29.2.1,
"Evaluation of Pending Virtual Interrupts", a virtual interrupt is
re-evaluated only at VM entry, TPR virtualization, EOI virtualization,
self-IPI virtualization, and posted-interrupt processing. The evaluation
is as below:

    IF "interrupt-window exiting" is 0 AND
       RVI[7:4] > VPPR[7:4] (see Section 29.1.1 for definition of VPPR)
        THEN recognize a pending virtual interrupt;
        ELSE do not recognize a pending virtual interrupt;
    FI;

Even if an evaluation is triggered while the highest vector is being
handled (before its EOI), RVI (now holding pt_vector) is smaller than
VPPR (set from the highest vector), so no injection of pt_vector will
happen. Then later, when the highest vector is EOI-ed, a VM-exit happens
immediately (see the small model after this mail)...

Thanks
Kevin
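To make the evaluation rule above concrete, a tiny standalone model
(illustrative only; the vector values 0x38 and 0x51 are made up) of why
RVI holding pt_vector is not recognized while a higher vector is in
service, and why it becomes deliverable right after that vector's EOI:

    /* Toy model of SDM 29.2.1 "Evaluation of Pending Virtual Interrupts".
     * Not Xen code - it only mimics the priority comparison, in which
     * just the priority class (bits 7:4) of RVI and VPPR matters. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static bool recognize_pending(uint8_t rvi, uint8_t vppr,
                                  bool intr_window_exiting)
    {
        /* IF "interrupt-window exiting" is 0 AND RVI[7:4] > VPPR[7:4] */
        return !intr_window_exiting && (rvi >> 4) > (vppr >> 4);
    }

    int main(void)
    {
        uint8_t pt_vector  = 0x38; /* assumed vpt vector, now in RVI   */
        uint8_t in_service = 0x51; /* assumed higher vector in service */
        uint8_t vppr = in_service & 0xf0;

        /* While 0x51 is in service, VPPR = 0x50 outranks 0x38's class,
         * so the pending pt_vector is NOT recognized: prints 0. */
        printf("before EOI: %d\n", recognize_pending(pt_vector, vppr, false));

        /* Once the guest EOIs 0x51, VPPR drops; with the EOI exit bitmap
         * set for 0x51, Xen gets a VM exit exactly here and can post or
         * account the pending vpt interrupt: prints 1. */
        vppr = 0x00;
        printf("after EOI:  %d\n", recognize_pending(pt_vector, vppr, false));
        return 0;
    }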