
Re: [Xen-devel] [xen-unstable test] 60076: regressions - FAIL



On 29/07/15 15:10, Julien Grall wrote:
> Hi Dario,
> 
> On 29/07/15 10:05, Dario Faggioli wrote:
>> On Wed, 2015-07-29 at 06:42 +0000, osstest service owner wrote:
>>> flight 60076 xen-unstable real [real]
>>> http://logs.test-lab.xenproject.org/osstest/logs/60076/
>>>
>>> Regressions :-(
>>>
>>> Tests which did not succeed and are blocking,
>>> including tests which could not be run:
>>>  test-amd64-amd64-xl-qemuu-ovmf-amd64 9 debian-hvm-install fail REGR. vs. 
>>> 59817
>>>  test-armhf-armhf-xl-multivcpu 14 guest-start.2            fail REGR. vs. 
>>> 59817
>>>
>> I took a quick look at the logs and didn't spot any obvious issues.
>>
>> AFAICT, it seems it was actually working:
>>
>> --- ---
>> http://logs.test-lab.xenproject.org/osstest/logs/60076/test-armhf-armhf-xl-multivcpu/serial-arndale-metrocentre.log
>> Jul 28 20:22:21.525058 [  623.706988] device vif2.0 entered promiscuous mode
>>
>> Jul 28 20:22:21.669108 [  623.713782] IPv6: ADDRCONF(NETDEV_UP): vif2.0: 
>> link is not ready
>>
>> Jul 28 20:22:21.677039 [  625.296200] xen-blkback:ring-ref 8, event-channel 
>> 3, protocol 1 (arm-abi) persistent grants
>>
>> Jul 28 20:22:23.261086 [  625.325256] xen-blkback:ring-ref 9, event-channel 
>> 4, protocol 1 (arm-abi) persistent grants
>>
>> Jul 28 20:22:23.293017 [  625.400219] IPv6: ADDRCONF(NETDEV_CHANGE): vif2.0: 
>> link becomes ready
>>
>> Jul 28 20:22:23.365065 [  625.405368] xenbr0: port 2(vif2.0) entered 
>> forwarding state
>>
>> Jul 28 20:22:23.365110 [  625.410948] xenbr0: port 2(vif2.0) entered 
>> forwarding state
>>
>> http://logs.test-lab.xenproject.org/osstest/logs/60076/test-armhf-armhf-xl-multivcpu/arndale-metrocentre---var-log-xen-console-guest-debian.guest.osstest.log
>> INIT: Entering runlevel: 2
>>
>> [info] Using makefile-style concurrent boot in runlevel 2.
>> [ ok ] Starting enhanced syslogd: rsyslogd.
>> [ ok ] Starting periodic command scheduler: cron.
>> [ ok ] Starting OpenBSD Secure Shell server: sshd.
>> 
>>
>> 
>>
>> Debian GNU/Linux 7 debian hvc0
>>
>> debian login: Debian GNU/Linux 7 debian hvc0
>>
>> debian login: 
>>
>> --- ---
>>
>> Can it be that things are "just" slow, since we're creating a 4-vCPU
>> guest on a 1-pCPU (not so powerful, I guess) host?
> 
> The Arndale board has 2 physical CPUs, although it looks like the
> secondary CPU never comes up:
> 
> Jul 28 01:35:39.057076 (XEN) Adding cpu 1 to runqueue 0
> Jul 28 01:35:39.057104 (XEN) Bringing up CPU1
> Jul 28 01:35:39.064998 (XEN) CPU1 never came online
> Jul 28 01:35:40.065133 (XEN) Removing cpu 1 from runqueue 0
> Jul 28 01:35:40.065176 (XEN) Failed to bring up CPU 1 (error -5)
> 
> This has been broken at some point in Xen 4.6. Xen 4.5 boots with
> the right number of physical CPUs on the Arndale.
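
If anyone wants to check other flights for the same bring-up failure, below
is a rough, untested Python sketch that scans serial logs for the two
messages quoted above. The timestamp/"(XEN)" prefix is deliberately not
matched, since it varies between flights:

#!/usr/bin/env python3
# Rough sketch (untested): flag the CPU bring-up failure in osstest
# serial logs.  Only the tail of each line is matched, since the
# "Jul 28 01:35:39.057076 (XEN) " prefix differs from flight to flight.
import re
import sys

FAIL_RE = re.compile(r'CPU\d+ never came online')
ERR_RE = re.compile(r'Failed to bring up CPU \d+ \(error -?\d+\)')

for path in sys.argv[1:]:
    with open(path, errors='replace') as log:
        for num, line in enumerate(log, 1):
            match = FAIL_RE.search(line) or ERR_RE.search(line)
            if match:
                print('%s:%d: %s' % (path, num, match.group(0)))
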
> 
> Nonetheless, we are aware of the multi-vcpu test failing from time to time
> on the Arndale. It only seems to happen with xen-unstable.
> 
> osstest waits 40s for the network to come up in the guest. When the
> test passes, osstest typically waits ~20s. I measured the
> time between
> 
> guest debian.guest.osstest 5a:36:0e:06:00:20 22 link/ip/tcp: waiting 40s...
> 
> and the first
> 
> executing ssh ... root@xxxxxxxxxxxxxx echo guest debian.guest.osstest: ok
> guest debian.guest.osstest: ok

> For instance see
> http://logs.test-lab.xenproject.org/osstest/logs/59910/test-armhf-armhf-xl-multivcpu/14.ts-guest-start.log

FWIW, there is also a worse case where the waiting time is very close to 40s
(exactly 38s):

http://logs.test-lab.xenproject.org/osstest/logs/59721/test-armhf-armhf-xl-multivcpu/14.ts-guest-start.log
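
To make the comparison easier across flights, here is a similar rough sketch
(untested) that computes that delta from a 14.ts-guest-start.log. The
"YYYY-MM-DD HH:MM:SS Z" timestamp prefix is an assumption on my side and may
need adjusting to whatever osstest actually writes at the start of each line:

#!/usr/bin/env python3
# Rough sketch (untested): measure how long the guest took to answer on the
# network, i.e. the gap between the "waiting 40s" line and the first
# "debian.guest.osstest: ok" line in a ts-guest-start.log.
# ASSUMPTION: each line starts with "YYYY-MM-DD HH:MM:SS Z"; adjust TS_RE
# if the log uses a different prefix.
import re
import sys
from datetime import datetime

TS_RE = re.compile(r'^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) Z')

def timestamp(line):
    m = TS_RE.match(line)
    return datetime.strptime(m.group(1), '%Y-%m-%d %H:%M:%S') if m else None

start = end = None
with open(sys.argv[1], errors='replace') as log:
    for line in log:
        ts = timestamp(line)
        if ts is None:
            continue
        if start is None and 'link/ip/tcp: waiting 40s' in line:
            start = ts
        elif start is not None and 'debian.guest.osstest: ok' in line:
            end = ts
            break

if start and end:
    print('guest answered after %.0fs' % (end - start).total_seconds())
else:
    print('could not find both markers in the log')

Feeding it the two logs above should give roughly the ~20s and 38s figures
mentioned.
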

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

