
Re: [linux-linus test] 181082: regressions - FAIL


  • To: osstest service owner <osstest-admin@xxxxxxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Fri, 2 Jun 2023 10:43:54 +0200
  • Delivery-date: Fri, 02 Jun 2023 08:44:09 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 02.06.2023 05:21, osstest service owner wrote:
> flight 181082 linux-linus real [real]
> flight 181098 linux-linus real-retest [real]
> http://logs.test-lab.xenproject.org/osstest/logs/181082/
> http://logs.test-lab.xenproject.org/osstest/logs/181098/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Following up from yesterday's discussion, I've noticed only now that
we had an apparently random success once in mid April. Without that,
we'd see ... 

> Tests which did not succeed, but are not blocking:
>  test-armhf-armhf-examine      8 reboot                       fail  like 180278
>  test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
>  test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
>  test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
>  test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
>  test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
>  test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
>  test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
>  test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
>  test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
>  test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
>  test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
>  test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
>  test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278

that singular test in the same group as all the other armhf ones. I
wonder whether we shouldn't try to get those in sync. Which direction
to go depends: aiui a force push would allow subsequent automatic
pushes if only the armhf tests fail, whereas clearing the "fail like"
state for all of them would give a better picture of what's actually
broken right now.

Jan
