
Re: [Xen-devel] [RFC PATCH] x86/apicv: fix RTC periodic timer and apicv issue



On August 22, 2016 6:36 PM, <JBeulich@xxxxxxxx> wrote:
>>>> On 19.08.16 at 14:58, <xuquan8@xxxxxxxxxx> wrote:
>> From 9b2df963c13ad27e2cffbeddfa3267782ac3da2a Mon Sep 17 00:00:00 2001
>> From: Quan Xu <xuquan8@xxxxxxxxxx>
>> Date: Fri, 19 Aug 2016 20:40:31 +0800
>> Subject: [RFC PATCH] x86/apicv: fix RTC periodic timer and apicv issue
>>
>> When Xen apicv is enabled, wall clock time runs fast on a Windows7-32
>> guest under high load (with 2 vCPUs; captured from xentrace, under high
>> load the count of IPI interrupts between these vCPUs increases rapidly).
>>
>> If an IPI interrupt (vector 0xe1) and a periodic timer interrupt (vector
>> 0xd1) are both pending (their bits are set in VIRR), the IPI interrupt
>> unfortunately has higher priority than the periodic timer interrupt. Xen
>> updates the IPI interrupt bit set in VIRR to guest interrupt status (RVI)
>> as the high-priority vector, and apicv (Virtual-Interrupt Delivery)
>> delivers the IPI interrupt within VMX non-root operation without a VM
>> exit. Within VMX non-root operation, once the periodic timer interrupt's
>> bit is set in VIRR and is the highest, apicv delivers the periodic timer
>> interrupt within VMX non-root operation as well.
>>
>> But in the current code, since Xen does not update the periodic timer
>> interrupt bit set in VIRR to guest interrupt status (RVI) directly, Xen
>> is not aware of this case and does not decrease the count
>> (pending_intr_nr) of pending periodic timer interrupts, so Xen delivers
>> the periodic timer interrupt again. The guest therefore receives extra
>> periodic timer interrupts.
>>
>> If the periodic timer interrupt has been delivered but is not the
>> highest priority, make Xen aware of this case so it decreases the count
>> of pending periodic timer interrupts.
>
>I can see the issue you're trying to address, but for one - doesn't
>this lead to other double accounting, namely once the pt irq becomes
>the highest priority one?
>

It does NOT lead to other double accounting..
If the pt irq becomes the highest priority one, intack is the pt one..
Then:

 +        else
 +            pt_intr_post(v, intack);


>> --- a/xen/arch/x86/hvm/vmx/intr.c
>> +++ b/xen/arch/x86/hvm/vmx/intr.c
>> @@ -334,7 +334,21 @@ void vmx_intr_assist(void)
>>              __vmwrite(EOI_EXIT_BITMAP(i),
>> v->arch.hvm_vmx.eoi_exit_bitmap[i]);
>>          }
>>
>> -        pt_intr_post(v, intack);
>> +        /*
>> +         * If the periodic timer interrupt has been delivered but is not
>> +         * the highest priority, make Xen aware of this case so it
>> +         * decreases the count of pending periodic timer interrupts.
>> +         */
>> +        if ( pt_vector != -1 && intack.vector > pt_vector )
>> +        {
>> +            struct hvm_intack pt_intack;
>> +
>> +            pt_intack.vector = pt_vector;
>> +            pt_intack.source = hvm_intsrc_lapic;
>> +            pt_intr_post(v, pt_intack);
>> +        }
>> +        else
>> +            pt_intr_post(v, intack);
>
>And then I'm having two problems with this change: Processing other
>than the highest priority irq here looks to be non-architectural. From
>looking over pt_intr_post() there's only pt related accounting and no
>actual interrupt handling there, but I'd like this to be
> (a) confirmed

To me, YES.. but I hope Kevin can confirm it as well.

>and (b) called out in the comment.

Agreed. Good idea.

>
>Plus the change above is in no way apicv specific, so I'd also like to
>see clarification why this change is valid (i.e. at least not making
>things worse) even without apicv in use.
>

IIUC, it is apicv specific, as this change is under the condition
'cpu_has_vmx_virtual_intr_delivery' (Virtual Interrupt Delivery is one of the apicv
features), as below:

    if ( cpu_has_vmx_virtual_intr_delivery ...) {  ___change___ }
 



>And of course in any event VMX maintainer feedback is going to be
>needed here.


Thanks for your comments!!
Quan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
