
Re: [Xen-devel] [xen-unstable test] 104131: regressions - FAIL



On February 13, 2017 4:21 PM, Tian, Kevin wrote:
>> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>> Sent: Wednesday, February 08, 2017 4:52 PM
>>
>> >>> On 08.02.17 at 09:27, <xuquan8@xxxxxxxxxx> wrote:
>> > Assume the vCPU is in guest mode.
>> > When apicv is enabled, the hypervisor calls vmx_deliver_posted_intr()
>> > and then __vmx_deliver_posted_interrupt() to deliver the interrupt,
>> > with no vmexit (and no vcpu_kick()).
>> > In __vmx_deliver_posted_interrupt(), delivery of the posted interrupt
>> > is __conditional__. If the posted interrupt is not delivered, it stays
>> > pending until the next VM entry, when PIR is synced to vIRR.
>> >
>> > One such condition is this test in __vmx_deliver_posted_interrupt():
>> > 'if ( !test_and_set_bit(VCPU_KICK_SOFTIRQ, &softirq_pending(cpu)) )'.
>> >
>> > Specifically, we verified this with the RES interrupt, which is used
>> > for smp_reschedule_interrupt.
>> > On average it even costs us more time to deliver a RES interrupt than
>> > in the no-apicv case.
>> >
>> > If RES interrupt no. 1 is delivered the posted way (the vCPU is still
>> > in guest mode), then when we try to deliver the next RES interrupt
>> > (no. 2) the posted way, it is not delivered, because we already set
>> > the VCPU_KICK_SOFTIRQ bit when delivering RES interrupt no. 1.
>> >
>> > The next RES interrupt (no. 2) then stays pending until the next VM
>> > entry, when PIR is synced to vIRR.
>> >
>> >
>> > We can fix it as below (I don't think this is the best fix; it would
>> > be better to set the VCPU_KICK_SOFTIRQ bit but not test it):
>> >
>> > --- a/xen/arch/x86/hvm/vmx/vmx.c
>> > +++ b/xen/arch/x86/hvm/vmx/vmx.c
>> > @@ -1846,7 +1846,7 @@ static void __vmx_deliver_posted_interrupt(struct vcpu *v)
>> >      {
>> >          unsigned int cpu = v->processor;
>> >
>> > -        if ( !test_and_set_bit(VCPU_KICK_SOFTIRQ, &softirq_pending(cpu))
>> > +        if ( !test_bit(VCPU_KICK_SOFTIRQ, &softirq_pending(cpu))
>> >               && (cpu != smp_processor_id()) )
>> >              send_IPI_mask(cpumask_of(cpu), posted_intr_vector);
>> >      }
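
[ For context: the hunk above sits in roughly the function below. This is a
paraphrased sketch from my reading of the tree, not a verbatim copy, so the
details may differ slightly: ]

static void __vmx_deliver_posted_interrupt(struct vcpu *v)
{
    bool_t running = v->is_running;

    vcpu_unblock(v);
    if ( running && (in_irq() || (v != current)) )
    {
        unsigned int cpu = v->processor;

        /*
         * The notification IPI is sent only if VCPU_KICK_SOFTIRQ was not
         * already set.  If the target vCPU stays in guest mode, the bit is
         * set but never consumed, so the next posted interrupt skips the
         * IPI and sits in PIR until some later VM exit/entry.
         */
        if ( !test_and_set_bit(VCPU_KICK_SOFTIRQ, &softirq_pending(cpu))
             && (cpu != smp_processor_id()) )
            send_IPI_mask(cpumask_of(cpu), posted_intr_vector);
    }
}
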
>>
>> While I don't think I fully understand your description, the line you
>> change here has always been puzzling me: If we were to raise a softirq
>> here, we ought to call cpu_raise_softirq() instead of partly open
>> coding what it does.
>
>We require posted_intr_vector so the target CPU can ack/deliver the virtual
>interrupt in non-root mode. cpu_raise_softirq() uses a different vector, which
>cannot trigger that effect.
>


Kevin,

I can't follow this 'to ack'.
As I understand it, the posted_intr_vector causes event_check_interrupt() [ or
pi_notification_interrupt() ] to be called, which writes zero to the EOI
register in the local APIC -- this dismisses the interrupt with the posted
interrupt notification vector from the local APIC.

What does this ack refer to?
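
For reference, this is roughly what I see those handlers doing (again a
paraphrased sketch from my reading of the tree, not a verbatim copy):

/* Sketch only -- paraphrased, details may differ. */
static void ack_APIC_irq(void)
{
    /* Write zero to the local APIC EOI register to dismiss the vector. */
    apic_write(APIC_EOI, 0);
}

static void event_check_interrupt(struct cpu_user_regs *regs)
{
    ack_APIC_irq();
    this_cpu(irq_count)++;
    /* Nothing more: pending PIR bits are synced to vIRR on VM entry. */
}

static void pi_notification_interrupt(struct cpu_user_regs *regs)
{
    ack_APIC_irq();
    this_cpu(irq_count)++;
    /*
     * As I read it, this also raises the softirq so that, if the
     * notification interrupted root mode, PIR gets synced to vIRR
     * before the guest is resumed.
     */
    raise_softirq(VCPU_KICK_SOFTIRQ);
}
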






>> So I think no longer marking that softirq pending (which is currently
>> done only incompletely) is a valid change in any case. But I'll have to
>> defer to Kevin, in the hope that he fully understands what you explain
>> above and knows why this was a test-and-set here in the first place.
>>
>
>I agree we have a misuse of the softirq mechanism here. If the guest is
>already in non-root mode, the 1st posted interrupt will be delivered directly
>to the guest (leaving the softirq bit set without actually incurring a
>VM-exit - breaking the desired softirq behavior). Further posted interrupts
>will then skip the IPI, stay in the PIR and go unnoticed until another
>VM-exit happens. It looks like Quan observed exactly this delivery delay in
>his experiments.
>
>I'm OK with removing the set here. Actually, since it's an optimization for
>fewer IPIs, we'd better check softirq_pending(cpu) directly instead of
>sticking to one bit only.
>
>Thanks
>Kevin
>
>
>
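
If I read the suggestion above correctly, the condition would become
something like the sketch below (only my understanding, not a tested patch):
skip the IPI whenever any softirq is already pending on the target CPU, since
that CPU will take a VM exit and sync PIR to vIRR anyway, and never set
VCPU_KICK_SOFTIRQ on this path.

        unsigned int cpu = v->processor;

        /*
         * Send the notification IPI only when no softirq at all is pending
         * on the target CPU and the target is not the local CPU.
         */
        if ( !softirq_pending(cpu) && (cpu != smp_processor_id()) )
            send_IPI_mask(cpumask_of(cpu), posted_intr_vector);
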


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

