
Re: [Xen-devel] [RFC PATCH] x86/apicv: fix RTC periodic timer and apicv issue



On August 22, 2016 5:38 PM, Yang Zhang wrote:
>On 2016/8/19 20:58, Xuquan (Euler) wrote:
>> From 9b2df963c13ad27e2cffbeddfa3267782ac3da2a Mon Sep 17 00:00:00 2001
>> From: Quan Xu <xuquan8@xxxxxxxxxx>
>> Date: Fri, 19 Aug 2016 20:40:31 +0800
>> Subject: [RFC PATCH] x86/apicv: fix RTC periodic timer and apicv issue
>>
>> When Xen apicv is enabled, wall clock time runs fast on a Windows7-32
>> guest under high load (with 2 vCPUs; xentrace shows that under high
>> load the number of IPIs between these vCPUs increases rapidly).
>>
>> If an IPI (vector 0xe1) and a periodic timer interrupt (vector 0xd1)
>> are both pending (their bits are set in VIRR), the IPI has higher
>> priority than the periodic timer interrupt. Xen writes the IPI vector
>> from VIRR into the guest interrupt status (RVI) as the highest-priority
>> pending interrupt, and apicv (Virtual-Interrupt Delivery) delivers the
>> IPI within VMX non-root operation without a VM exit. Afterwards, still
>> in VMX non-root operation, once the periodic timer interrupt becomes the
>> highest-priority bit set in VIRR, apicv delivers it without a VM exit
>> as well.
>>
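To make the delivery order described above concrete, here is a minimal sketch
of how the highest-priority pending vector is picked out of a 256-bit VIRR.
This is illustrative only, not the actual Xen/VMX implementation; the 8x32-bit
VIRR layout and the helper name are assumptions of the sketch:

#include <stdint.h>

/* Return the highest vector whose bit is set in virr[], or -1 if none.
 * RVI ends up holding this vector, and Virtual-Interrupt Delivery
 * services it first. */
static int highest_pending_vector(const uint32_t virr[8])
{
    for ( int word = 7; word >= 0; word-- )
        if ( virr[word] )
            return word * 32 + (31 - __builtin_clz(virr[word]));
    return -1;
}

With both 0xe1 (IPI) and 0xd1 (periodic timer) set, this returns 0xe1, so the
IPI is delivered first and the timer vector is delivered later, still inside
non-root operation, without another VM exit.
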
>> In the current code, however, if Xen does not itself write the periodic
>> timer interrupt from VIRR into the guest interrupt status (RVI), it is
>> not aware of this case and does not decrease the count of pending
>> periodic timer interrupts (pending_intr_nr), so it later delivers the
>> periodic timer interrupt again. The guest therefore receives too many
>> periodic timer interrupts.
>>
>> If the periodic timer interrupt is delivered but is not the
>> highest-priority pending interrupt, make Xen aware of this case so
>> that it decreases the count of pending periodic timer interrupts.
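In other words, the timer tick that Virtual-Interrupt Delivery injects behind
Xen's back still has to be accounted for. A small self-contained toy model of
the bookkeeping (not the real pt_intr_post()/vpt.c code; the struct, the
helper and the counter handling are all simplified assumptions) shows the
difference the fix makes:

#include <stdio.h>

/* Toy stand-in for a periodic timer's bookkeeping. */
struct toy_pt {
    int vector;                   /* guest vector, e.g. 0xd1 */
    unsigned int pending_intr_nr; /* ticks injected but not yet accounted */
};

/* Toy stand-in for pt_intr_post(): account for one delivered tick. */
static void toy_pt_intr_post(struct toy_pt *pt, int delivered_vector)
{
    if ( pt->vector == delivered_vector && pt->pending_intr_nr )
        pt->pending_intr_nr--;
}

int main(void)
{
    struct toy_pt pt = { .vector = 0xd1, .pending_intr_nr = 1 };
    int pt_vector = pt.vector;
    int intack_vector = 0xe1;     /* the IPI wins the priority comparison */

    /* Old behaviour: only the acked (highest-priority) vector is accounted
     * for, so the timer tick already queued in VIRR is never counted. */
    toy_pt_intr_post(&pt, intack_vector);
    printf("old path: pending_intr_nr = %u\n", pt.pending_intr_nr); /* 1 */

    /* Patched behaviour: if the timer vector is pending but not the
     * highest-priority one, account for it now. */
    if ( pt_vector != -1 && intack_vector > pt_vector )
        toy_pt_intr_post(&pt, pt_vector);
    else
        toy_pt_intr_post(&pt, intack_vector);
    printf("new path: pending_intr_nr = %u\n", pt.pending_intr_nr); /* 0 */

    return 0;
}

With the old path the counter stays at 1, so the timer tick is injected again
later even though the guest has already taken it; with the patched path it
drops to 0.
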
>
>Nice catch!
>
>>
>> Signed-off-by: Yifei Jiang <jiangyifei@xxxxxxxxxx>
>> Signed-off-by: Rongguang He <herongguang.he@xxxxxxxxxx>
>> Signed-off-by: Quan Xu <xuquan8@xxxxxxxxxx>
>> ---
>> Why RFC:
>> 1. I am not quite sure about other cases, such as the nested case.
>
>We don't support nested APICv now, so it is ok for this patch.
>
>> 2. Within VMX non-root operation, an asynchronous event (an external
>>    interrupt, a non-maskable interrupt, a system-management interrupt,
>>    an exception or a VM exit) may occur before the periodic timer
>>    interrupt is delivered, and that periodic timer interrupt may then be
>>    lost when the next periodic timer interrupt is delivered. The current
>>    code already behaves this way.
>
>I think you mean the combination (coalescing) of interrupts, not a lost
>interrupt. It should be ok since it happens rarely, and even a native OS
>will encounter the same problem if it disables interrupts for too long.
>

Yes, it is 'combination'.
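
For anyone following along, my understanding of the 'combination' is that the
timer has a single bit in VIRR, so a second tick raised before the first is
delivered just re-sets the same bit and the guest sees one interrupt for the
two ticks. A rough sketch (illustrative only; the bitmap layout and names are
assumptions, not Xen code):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t virr[8] = { 0 };
    int vec = 0xd1;
    int delivered = 0;

    virr[vec / 32] |= 1u << (vec % 32);   /* first tick raised   */
    virr[vec / 32] |= 1u << (vec % 32);   /* second tick: no-op  */

    if ( virr[vec / 32] & (1u << (vec % 32)) )
    {
        virr[vec / 32] &= ~(1u << (vec % 32));  /* delivered once */
        delivered++;
    }

    printf("ticks raised: 2, interrupts delivered: %d\n", delivered); /* 1 */
    return 0;
}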

>> ---
>>  xen/arch/x86/hvm/vmx/intr.c | 16 +++++++++++++++-
>>  1 file changed, 15 insertions(+), 1 deletion(-)
>>
>> diff --git a/xen/arch/x86/hvm/vmx/intr.c b/xen/arch/x86/hvm/vmx/intr.c
>> index 8fca08c..d3a034e 100644
>> --- a/xen/arch/x86/hvm/vmx/intr.c
>> +++ b/xen/arch/x86/hvm/vmx/intr.c
>> @@ -334,7 +334,21 @@ void vmx_intr_assist(void)
>>              __vmwrite(EOI_EXIT_BITMAP(i), v->arch.hvm_vmx.eoi_exit_bitmap[i]);
>>          }
>>
>> -        pt_intr_post(v, intack);
>> +        /*
>> +         * If the periodic timer interrupt is delivered but is not the highest
>> +         * priority, make Xen aware of this case so that it decreases the count
>> +         * of pending periodic timer interrupts.
>> +         */
>> +        if ( pt_vector != -1 && intack.vector > pt_vector )
>> +        {
>> +            struct hvm_intack pt_intack;
>> +
>> +            pt_intack.vector = pt_vector;
>> +            pt_intack.source = hvm_intsrc_lapic;
>> +            pt_intr_post(v, pt_intack);
>> +        }
>> +        else
>> +            pt_intr_post(v, intack);
>>      }
>>      else
>>      {
>> --
>> 1.7.12.4
>>
>
>Looks good to me.
>
>Reviewed-by: Yang Zhang <yang.zhang.wz@xxxxxxxxx>
>


Thanks for your comments!!
Quan


>--
>Yang
>Alibaba Cloud Computing

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

