
Re: [Xen-devel] [xen-unstable test] 104131: regressions - FAIL



On Wed, Feb 08, 2017 at 10:15:28AM +0000, Xuquan (Quan Xu) wrote:
>On February 08, 2017 4:52 PM, Jan Beulich wrote:
>>>>> On 08.02.17 at 09:27, <xuquan8@xxxxxxxxxx> wrote:
>>> Assume the vCPU is in guest_mode..
>>> When apicv is enabled, the hypervisor calls vmx_deliver_posted_intr(),
>>> then __vmx_deliver_posted_interrupt(), to deliver the interrupt, but
>>> there is no vmexit (and also no vcpu_kick() )..
>>> In __vmx_deliver_posted_interrupt(), delivery of the posted interrupt
>>> is __conditional__. If the posted interrupt is not delivered, it stays
>>> pending until the next VM entry -- synced from PIR to vIRR..
>>>
>>> One condition is:
>>> in __vmx_deliver_posted_interrupt(),
>>> ' if ( !test_and_set_bit(VCPU_KICK_SOFTIRQ, &softirq_pending(cpu)) '..
>>>
>>> Specifically, we verified it with the RES interrupt, which is used for
>>> smp_reschedule_interrupt..
>>> On average, it even takes more time to deliver the RES interrupt than
>>> with no apicv..
>>>
>>> If RES interrupt (no. 1) is delivered the posted way (the vCPU is
>>> still in guest_mode).. then when we try to deliver the next-coming
>>> RES interrupt (no. 2) the posted way, it is not delivered, as we set
>>> the VCPU_KICK_SOFTIRQ bit when delivering RES interrupt (no. 1)..
>>>
>>> The next-coming RES interrupt (no. 2) is then pending until the next
>>> VM entry -- synced from PIR to vIRR..
>>>
>>>
>>> We can fix it as below (I don't think this is the best fix; it would
>>> be better to set the VCPU_KICK_SOFTIRQ bit, but not test it):
>>>
>>> --- a/xen/arch/x86/hvm/vmx/vmx.c
>>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
>>> @@ -1846,7 +1846,7 @@ static void __vmx_deliver_posted_interrupt(struct vcpu *v)
>>>      {
>>>          unsigned int cpu = v->processor;
>>>
>>> -        if ( !test_and_set_bit(VCPU_KICK_SOFTIRQ, &softirq_pending(cpu))
>>> +        if ( !test_bit(VCPU_KICK_SOFTIRQ, &softirq_pending(cpu))
>>>               && (cpu != smp_processor_id()) )
>>>              send_IPI_mask(cpumask_of(cpu), posted_intr_vector);
>>>      }
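
The "set but don't test" variant Quan mentions would, as I read it, look
roughly like this (only a sketch of the idea, not a posted patch; same
function, with the conditional on the bit's old value dropped):

        unsigned int cpu = v->processor;

        /* Always mark the softirq pending, so PIR gets synced to vIRR
         * on the way back into the guest ... */
        set_bit(VCPU_KICK_SOFTIRQ, &softirq_pending(cpu));

        /* ... and notify a remote CPU regardless of the bit's old value. */
        if ( cpu != smp_processor_id() )
            send_IPI_mask(cpumask_of(cpu), posted_intr_vector);
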
>>
>>While I don't think I fully understand your description, 
>
>Sorry!!
>
>>the line you change
>>here has always been puzzling me: If we were to raise a softirq here, we
>>ought to call cpu_raise_softirq() instead of partly open coding what it does.
>>So I think not marking that softirq pending (but doing this incompletely) is
>>a valid change in any case.
>
>As the comment in pi_notification_interrupt() -- xen/arch/x86/hvm/vmx/vmx.c
>-- says:
>((((
>     *
>     * we need to set VCPU_KICK_SOFTIRQ for the current cpu, just like
>     * __vmx_deliver_posted_interrupt(). So the pending interrupt in PIRR will
>     * be synced to vIRR before VM-Exit in time.
>     *
>))))
>
>I think that by setting the VCPU_KICK_SOFTIRQ bit, the pending interrupt
>in PIRR will be synced to vIRR before VM-Exit in time.
>That's also why I said it is better to set the VCPU_KICK_SOFTIRQ bit, but
>not test it..
>

I think there is a typo; it should be "before VM-Entry in time". We set
the VCPU_KICK_SOFTIRQ bit only so that we jump to vmx_do_vmentry again
instead of entering the guest directly; going through vmx_do_vmentry
again re-syncs the PIR to the vIRR in vmx_intr_assist(). In root mode,
the CPU treats the PI notification interrupt as a normal interrupt, so
it runs the interrupt handler pi_notification_interrupt() instead of
syncing the PIR to the vIRR automatically. Receiving a PI notification
interrupt means some interrupts have been posted to the PIR; setting
that bit makes sure these newly arrived interrupts are delivered before
this VM entry.
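
Roughly, that handler reduces to the following (a simplified sketch of
xen/arch/x86/hvm/vmx/vmx.c as I read it, with bookkeeping omitted):

    static void pi_notification_interrupt(struct cpu_user_regs *regs)
    {
        ack_APIC_irq();

        /*
         * In root mode the notification vector is just an ordinary
         * interrupt, so nothing syncs PIR to vIRR for us.  Raising
         * VCPU_KICK_SOFTIRQ makes the softirq check in vmx_do_vmentry
         * fail, sending us through do_softirq() and back to
         * vmx_do_vmentry, where vmx_intr_assist() picks up the newly
         * posted bits in PIR before the guest is resumed.
         */
        raise_softirq(VCPU_KICK_SOFTIRQ);
    }
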

Thanks
chao

>
>>But I'll have to defer to Kevin, in the hope that he fully understands
>>what you explain above and also knows why this was a test-and-set here
>>in the first place.
>>
>
>To me, this test-and-set is a bug.
>
>Quan
>
