
RE: [Xen-devel] [PATCH]Fixes for overflowed calculation in vHPET



Hi, Keir,
 
I think one of the following two changes should be made. What do you think?
 
#define hpet_tick_to_ns(h, tick)                        \
    ((s_time_t)((((tick) > (h)->hpet_to_ns_limit) ?     \
-        ~0ULL : (tick) * (h)->hpet_to_ns_scale) >> 10))
+       ~0ULL >> 1 : (tick) * (h)->hpet_to_ns_scale) >> 10))
 
Or we can make the change here instead:
-    h->hpet_to_ns_limit = (~0ULL >> 1) / h->hpet_to_ns_scale;
+    h->hpet_to_ns_limit = ~0ULL / h->hpet_to_ns_scale;
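
For clarity, here is a minimal standalone sketch of how the limit and the saturation value fit together (this is not the actual Xen source; the struct and function names are simplified for illustration):

#include <stdint.h>

/* Simplified model of the conversion: ns = (tick * scale) >> 10, where
 * hpet_to_ns_limit is the largest tick for which the 64-bit multiply is
 * safe.  The saturation value used when tick > limit should match
 * whichever limit definition is chosen, so the clamp stays consistent
 * with the guard. */
struct hpet_sketch {
    uint64_t hpet_to_ns_scale;   /* fixed-point ns per tick, scaled by 2^10 */
    uint64_t hpet_to_ns_limit;   /* (~0ULL >> 1) / scale, or ~0ULL / scale */
};

static int64_t tick_to_ns(const struct hpet_sketch *h, uint64_t tick)
{
    uint64_t scaled = (tick > h->hpet_to_ns_limit)
                      ? (~0ULL >> 1)                  /* saturate; pairs with limit = (~0ULL >> 1) / scale */
                      : tick * h->hpet_to_ns_scale;   /* guarded multiply, no overflow */
    return (int64_t)(scaled >> 10);
}

Either pairing works; the point is that the clamp and the limit should describe the same boundary.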
 
BTW: Sorry, I did not see that you had already checked in the two patches when I composed my last mail.

Best Regards
Haitao Shan

 


From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Shan, Haitao
Sent: 9 January 2008 17:42
To: Keir Fraser; xen-devel@xxxxxxxxxxxxxxxxxxx
Cc: Mark McLoughlin; Cui, Dexuan
Subject: RE: [Xen-devel] [PATCH]Fixes for overflowed calculation in vHPET

Yes. That's why I said it is normally OK.
In fact, this change is closely related to another patch I sent (see the attachment). The correct behavior of HPET is that the main counter and the timers are all enabled when HPET is globally enabled, and the timer period following a reset is 0xffff_ffff_ffff_ffff. If a guest enables HPET just to use the main counter, that large value will be used to set the timer when the status is updated, and at that point the period will be forced to 0.
The current vHPET uses the per-timer interrupt control bit as a per-timer enable control bit, and timer interrupts are disabled by default. So, luckily, the above scenario cannot happen in the current implementation, since that large value is never used to set the timer.
 
As long as no one uses HPET like that, I think there is no problem and the patch can be ignored. The question is whether we should make the device model strictly follow the specification, given that the current vHPET does not.
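
To illustrate the distinction (a sketch only, using the bit names from the HPET specification and made-up helper functions, not the vHPET code):

#include <stdint.h>
#include <stdbool.h>

#define ENABLE_CNF        (1u << 0)   /* general config: overall enable */
#define Tn_INT_ENB_CNF    (1u << 2)   /* timer N config: interrupt enable only */

/* Per the spec, the main counter and the comparators run whenever the
 * global enable bit is set; the per-timer bit only gates interrupt
 * delivery.  Treating the per-timer bit as a run/stop control (as the
 * current vHPET effectively does) is what keeps the all-ones reset
 * comparator from ever reaching the timer-arming code today. */
static bool timer_should_run(uint32_t general_cfg, uint32_t tn_cfg)
{
    (void)tn_cfg;                     /* spec: not a run/stop bit */
    return (general_cfg & ENABLE_CNF) != 0;
}

static bool timer_irq_enabled(uint32_t general_cfg, uint32_t tn_cfg)
{
    return (general_cfg & ENABLE_CNF) && (tn_cfg & Tn_INT_ENB_CNF);
}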

Best Regards
Haitao Shan

 


From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Keir Fraser
Sent: 9 January 2008 16:37
To: Shan, Haitao; xen-devel@xxxxxxxxxxxxxxxxxxx
Cc: Mark McLoughlin; Cui, Dexuan
Subject: Re: [Xen-devel] [PATCH]Fixes for overflowed calculation in vHPET

It sounds like a theoretical problem to me. You'd have to set the period, or single-shot timeout, to many years to have it wrap around into the 64th bit and appear negative. No one will do that.
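
(For scale, assuming the converted value is in nanoseconds as in hpet_tick_to_ns: the result only reaches the sign bit once it exceeds 2^63 ns, i.e. about 9.2e18 ns, which is roughly 292 years.)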

 -- Keir

On 9/1/08 01:19, "Shan, Haitao" <haitao.shan@xxxxxxxxx> wrote:

I think it is OK for normal usage and for 32-bit timer operation.
But if a timer is programmed in 64-bit mode and the programmed period is sufficiently large, say 0xf000_0000_0000_0000, the code runs into trouble. The timer should in fact never fire. However, (int64_t)0xf000_0000_0000_0000 < 0, so the period is forced to 0 and the timer fires immediately.
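
A tiny standalone illustration of that failure mode (variable names are made up; this is not the vHPET code itself):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* A 64-bit period near the top of the unsigned range, e.g.
     * 0xf000_0000_0000_0000 or the 0xffff_ffff_ffff_ffff reset value. */
    uint64_t period_ticks = 0xf000000000000000ULL;

    /* Interpreting it as a signed quantity makes it negative ... */
    int64_t diff = (int64_t)period_ticks;

    /* ... and "negative means already expired, clamp to 0" fires the
     * timer immediately instead of (effectively) never. */
    if (diff < 0)
        diff = 0;

    printf("diff = %lld\n", (long long)diff);   /* prints 0 */
    return 0;
}
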
Best Regards
Haitao Shan

 


From: Keir Fraser [mailto:Keir.Fraser@xxxxxxxxxxxx]
Sent: 8 January 2008 22:15
To: Shan, Haitao; xen-devel@xxxxxxxxxxxxxxxxxxx
Cc: Mark McLoughlin; Cui, Dexuan
Subject: Re: [Xen-devel] [PATCH]Fixes for overflowed calculation in vHPET

On 4/1/08 03:21, "Shan, Haitao" <haitao.shan@xxxxxxxxx> wrote:

This patch fixes the bugs in hpet_set_timer. Currently, in hpet_tick_to_ns, the approach is to multiply first, which easily causes an overflow when the tick value is quite large. The patch cannot handle arbitrarily large tick values, due to the precision requirement and the 64-bit value range, but by rearranging the calculation it supports larger tick values than the current code does. An overflow check is also added before the calculation.
This patch also fixes the wrong handling of the wrap-around case when a timer is in 64-bit mode.
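
One way to rearrange the conversion so that larger tick values survive (a sketch of the idea only, with made-up names, not the submitted patch):

#include <stdint.h>

/* Compute (tick * scale) >> 10 without doing the full multiply first.
 * Splitting tick around the 2^10 fixed-point shift lets tick values
 * roughly 1024 times larger be converted before the intermediate
 * product overflows, and a guard on the dominant term saturates
 * anything beyond that. */
static int64_t tick_to_ns_rearranged(uint64_t tick, uint64_t scale)
{
    uint64_t hi = tick >> 10;
    uint64_t lo = tick & ((1ULL << 10) - 1);
    uint64_t ns;

    if (scale != 0 && hi > (uint64_t)INT64_MAX / scale)
        return INT64_MAX;                       /* would overflow: saturate */

    ns = hi * scale + ((lo * scale) >> 10);
    return (ns > (uint64_t)INT64_MAX) ? INT64_MAX : (int64_t)ns;
}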
 

What’s wrong with the handling of the wrap-around case? It looks okay to me.

 -- Keir


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

