
Re: [Xen-devel] [PATCH] BUG in pv_clock when overflow condition is detected



On 02/20/2012 04:28 PM, Konrad Rzeszutek Wilk wrote:
On Fri, Feb 17, 2012 at 04:25:04PM +0100, Igor Mammedov wrote:
On 02/16/2012 03:03 PM, Avi Kivity wrote:
On 02/15/2012 07:18 PM, Igor Mammedov wrote:
On 02/15/2012 01:23 PM, Igor Mammedov wrote:
   static u64 pvclock_get_nsec_offset(struct pvclock_shadow_time *shadow)
   {
-    u64 delta = native_read_tsc() - shadow->tsc_timestamp;
+    u64 delta;
+    u64 tsc = native_read_tsc();
+    BUG_ON(tsc < shadow->tsc_timestamp);
+    delta = tsc - shadow->tsc_timestamp;
       return pvclock_scale_delta(delta, shadow->tsc_to_nsec_mul,
                      shadow->tsc_shift);

Maybe a WARN_ON_ONCE()?  Otherwise a relatively minor hypervisor bug
can kill the guest.


An attempt to print from this place is not ideal, since it often leads
to a recursive call into this very function, and it hangs there anyway.
But if you insist I'll re-post it with WARN_ON_ONCE; it won't make much
difference because the guest will hang/stall due to the overflow anyway.

Won't a BUG_ON() also result in a printk?
Yes, it will. But the stack will still hold the failure point, and
poking at the core with crash/gdb will always show where it BUGged.

In case it manages to print a dump somehow (I saw it a couple of times
in ~30 test cycles), the logs from the console or from the kernel
message buffer (again, poking with gdb) will show where it was called from.

If WARN* is used, it will still totally screw up the clock and the
"last value", and the system will become unusable, requiring a look at
the core with gdb/crash anyway.

So I've just used a more reliable failure point that leaves a trace
everywhere it can (maybe in the console log, but for sure on the stack).
With WARN it might leave a trace on the console or not, and it probably
won't reflect the failure point on the stack either, leaving only the
kernel message buffer as a clue.


Makes sense.  But do get an ack from the Xen people to ensure this
doesn't break for them.

Konrad, Ian

Could you please review the patch from the Xen point of view?
The whole thread can be found here: https://lkml.org/lkml/2012/2/13/286

What are the conditions under which this happens?
You should probably include that in the git description as well.
This happens on CPU hot-plug in a KVM guest:
https://lkml.org/lkml/2012/2/7/222

It probably doesn't affect Xen PV guests, but the issue might affect HVM
ones. I'm certainly not enough of a Xen expert to say for sure after a
cursory look at the code. If you can confirm that it affects Xen HVM, I
will write an early_percpu_clock_init patch for it as well.

Is this something that happens often?
Very seldom and unlikely.

Hm, so are you asking for review for this patch
I was asking for review of the subject patch,
  "BUG in pv_clock when overflow condition is detected".
I'll update the patch description and re-spin it.

 If there is an overflow, can you synthesize a value instead of
crashing the guest?
or for http://www.spinics.net/lists/kvm/msg68440.html ?
Probably we could, but there was an argument that this fixes the symptoms
and not the root cause. It seems that you've already found the patch that
proposes this: "pvclock: Make pv_clock more robust and fixup it if
overflow happens".
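
For what it's worth, the "synthesize a value" alternative could look something like this sketch (an assumption for illustration, not the actual posted patch): clamp a backwards TSC reading so the delta can never wrap, at the cost of the clock standing still until the TSC catches up.

```c
#include <stdint.h>

/* Hypothetical fixup variant: instead of BUG_ON, treat a TSC reading
 * that is behind the saved timestamp as "no time has passed", so the
 * unsigned subtraction can never wrap around. */
static uint64_t fixup_delta(uint64_t tsc, uint64_t tsc_timestamp)
{
    if (tsc < tsc_timestamp)
        return 0;  /* clamp instead of wrapping modulo 2^64 */
    return tsc - tsc_timestamp;
}
```

The trade-off is exactly the one debated above: the guest keeps running, but the underlying bug (a stale per-cpu tsc_timestamp) is papered over.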


(which would also entail an early_percpu_clock_init implementation
in the Xen code, naturally).


--
Thanks,
 Igor

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

