
Re: [Xen-devel] xen 4.3 test report



On Sat, May 25, 2013 at 12:15:44AM +0400, Vasiliy Tolstov wrote:
> 2013/5/24 George Dunlap <George.Dunlap@xxxxxxxxxxxxx>:
> >
> > Did you mean xm save or xl save?
> 
> 
> In my case xl save crashes the domU with messages like the following.
> The domU crashes with the centos 2.6.18 and 2.6.32 (xenlinux) kernels
> as well as with the newer 3.8.6 and 3.4... kernels.

Is the 3.8.6 crashing at the same point?
> 
> [ 1826.587110] PM: late freeze of devices complete after 0.048 msecs
> [ 1826.591220] ------------[ cut here ]------------
> [ 1826.591220] kernel BUG at
> /build/buildd-linux_3.2.41-2-amd64-Wvc92F/linux-3.2.41/drivers/xen/events.c:1489!

That looks to be this check
(https://git.kernel.org/cgit/linux/kernel/git/bwh/linux-3.2.y.git/tree/drivers/xen/events.c):

        if (HYPERVISOR_event_channel_op(EVTCHNOP_bind_virq,
                                                &bind_virq) != 0)
                        BUG();
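
For context (this is from memory of the 3.2 tree, so please double-check
against your sources): that call is made from restore_cpu_virqs(), which
xen_irq_resume() runs for each CPU when the guest comes back from a
suspend. Roughly (trimmed):

static void restore_cpu_virqs(unsigned int cpu)
{
        struct evtchn_bind_virq bind_virq;
        int virq, irq, evtchn;

        for (virq = 0; virq < NR_VIRQS; virq++) {
                if ((irq = per_cpu(virq_to_irq, cpu)[virq]) == -1)
                        continue;

                /* Ask Xen for a fresh binding for this VIRQ ... */
                bind_virq.virq = virq;
                bind_virq.vcpu = cpu;
                /* ... and BUG() if the hypervisor refuses (events.c:1489). */
                if (HYPERVISOR_event_channel_op(EVTCHNOP_bind_virq,
                                                &bind_virq) != 0)
                        BUG();
                evtchn = bind_virq.port;

                /* Record the new evtchn <-> irq mapping. */
                xen_irq_info_virq_init(cpu, irq, evtchn, virq);
                bind_evtchn_to_cpu(evtchn, cpu);
        }
}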

It is odd for that hypercall to fail at all. Would you be able to
instrument evtchn_bind_virq (the Xen side of it) with some printks so we
can see which check is failing? Something like this (I haven't
compile-tested it):

diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 2d7afc9..c109cee 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -270,24 +270,34 @@ static long evtchn_bind_virq(evtchn_bind_virq_t *bind)
     int            port, virq = bind->virq, vcpu = bind->vcpu;
     long           rc = 0;
 
-    if ( (virq < 0) || (virq >= ARRAY_SIZE(v->virq_to_evtchn)) )
+    if ( (virq < 0) || (virq >= ARRAY_SIZE(v->virq_to_evtchn)) ) {
+        gdprintk(XENLOG_WARNING, "d%dv%d [%s:%d], virq:%d, rc:%d\n",
+                 d->domain_id, vcpu, __func__, __LINE__, virq, -EINVAL);
         return -EINVAL;
-
-    if ( virq_is_global(virq) && (vcpu != 0) )
+    }
+    if ( virq_is_global(virq) && (vcpu != 0) ) {
+        gdprintk(XENLOG_WARNING, "d%dv%d [%s:%d], virq_is_global:%d, rc:%d\n",
+                 d->domain_id, vcpu, __func__, __LINE__,
+                 virq_is_global(virq), -EINVAL);
         return -EINVAL;
-
+    }
     if ( (vcpu < 0) || (vcpu >= d->max_vcpus) ||
-         ((v = d->vcpu[vcpu]) == NULL) )
+         ((v = d->vcpu[vcpu]) == NULL) ) {
+        gdprintk(XENLOG_WARNING, "d%dv%d [%s:%d], v:%p, max_vcpus:%d, rc:%d\n",
+                 d->domain_id, vcpu, __func__, __LINE__, v, d->max_vcpus,
+                 -ENOENT);
         return -ENOENT;
-
+    }
     spin_lock(&d->event_lock);
 
-    if ( v->virq_to_evtchn[virq] != 0 )
+    if ( v->virq_to_evtchn[virq] != 0 ) {
+        gdprintk(XENLOG_WARNING, "d%dv%d [%s:%d], evtchn:%d, rc:%d\n",
+                 d->domain_id, vcpu, __func__, __LINE__,
+                 v->virq_to_evtchn[virq], -EEXIST);
         ERROR_EXIT(-EEXIST);
-
-    if ( (port = get_free_port(d)) < 0 )
+    }
+    if ( (port = get_free_port(d)) < 0 ) {
+        gdprintk(XENLOG_WARNING, "d%dv%d [%s:%d], port:%d, rc:%d\n",
+                 d->domain_id, vcpu, __func__, __LINE__, port, port);
         ERROR_EXIT(port);
-
+    }
     chn = evtchn_from_port(d, port);
     chn->state          = ECS_VIRQ;
     chn->notify_vcpu_id = vcpu;
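
With that applied, whichever check trips should show up via xl dmesg (or
on the serial console) right before the guest hits the BUG(). My guess,
and it is only a guess until we see the output, is the -EEXIST case: if
the suspend was cancelled, the guest resumes in the same domain, where
the old VIRQ bindings still exist, so the rebind would fail.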
