
Re: [Xen-devel] [xen-4.6-testing test] 137064: regressions - FAIL


  • To: Jan Beulich <JBeulich@xxxxxxxx>
  • From: Ian Jackson <ian.jackson@xxxxxxxxxx>
  • Date: Mon, 17 Jun 2019 15:29:38 +0100
  • Cc: Juergen Gross <jgross@xxxxxxxx>, xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Mon, 17 Jun 2019 14:30:34 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

Jan Beulich writes ("Re: [xen-4.6-testing test] 137064: regressions - FAIL"):
> Fundamentally I don't care overly much about this old tree, but
> I can't figure how you came to the "mostly new tests in XTF"
> conclusion. In fact ...

Hmmm.  I think you are right and I misread the report.

> ... these are all XTF related ones, and leak-check failures imo aren't
> liable to be related to "new" XTF tests. Otoh I think leak-check failures
> are sufficiently "fine" to ignore, and hence aren't an argument against
> a force push.

IIRC the leak-check failures were due to the host crashing during the
XTF tests.  For example,
  
http://logs.test-lab.xenproject.org/osstest/logs/137847/test-xtf-amd64-amd64-1/info.html
shows
  ssh: connect to host 172.16.144.37 port 22: No route to host

Looking at the logs, this seems to be due to the XSA-279 test.

Jun 17 01:16:00.974495 (d96) XSA-279 PoC
...
Jun 17 01:16:01.202545 (XEN) Xen call trace:
Jun 17 01:16:01.202545 (XEN)    [<ffff82d08016a0c6>] flush_area_local+0x6f/0x288
Jun 17 01:16:01.214533 (XEN)    [<ffff82d08018cb14>] flush_area_mask+0x9e/0x135
Jun 17 01:16:01.214533 (XEN)    [<ffff82d0801866a1>] __do_update_va_mapping+0x518/0x727
Jun 17 01:16:01.226723 (XEN)    [<ffff82d0801868df>] do_update_va_mapping+0x2f/0x62
Jun 17 01:16:01.226805 (XEN)    [<ffff82d080247005>] lstar_enter+0x1a5/0x1ff
...
Jun 17 01:16:01.238545 (XEN) Panic on CPU 3:
Jun 17 01:16:01.238545 (XEN) GENERAL PROTECTION FAULT
Jun 17 01:16:01.250536 (XEN) [error_code=0000]
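
For reference, this failure mode matches the XSA-279 class of problem:
an address-specific TLB flush requested for a non-canonical virtual
address, which INVPCID (unlike INVLPG) faults on.  Below is a minimal
sketch of the kind of PV-guest request that exercises that path.  It is
illustrative only (not the actual XTF PoC), and it assumes the classic
HYPERVISOR_update_va_mapping interface and UVMF_INVLPG flag as supplied
by the guest environment (mini-os, XTF, ...).

#include <stdint.h>

#define UVMF_INVLPG 2UL              /* flush only the one entry */

/* Classic PV hypercall wrapper, assumed provided by the guest env. */
extern long HYPERVISOR_update_va_mapping(unsigned long va,
                                         uint64_t new_pte,
                                         unsigned long flags);

static void poke_xsa279(void)
{
    unsigned long va = 0x8000000000000000UL; /* non-canonical on x86-64 */

    /* The PTE value is uninteresting here; the flush request is what
     * matters.  An unfixed hypervisor panics as in the log above. */
    HYPERVISOR_update_va_mapping(va, 0, UVMF_INVLPG);
}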

> I'm far more worried about all these guest install failures - it can't
> really help to ignore them by way of doing a force push. Without
> having looked, quite likely they're (almost) all the same hvmloader
> issue as diagnosed on the 4.7 branch. If so, waiting for the tests
> to actually succeed would seem better to me.

I think you are right about those.

> To give osstest some relief, would it be possible to temporarily
> disable testing of the older trees (which we know won't succeed)?
> They could be incrementally re-enabled from oldest onwards once
> we know the -prev build issues have been addressed in the
> respective N-1 tree.

This is a good idea and I should have done it earlier.

I have now disabled 4.8 and 4.9.  I have left 4.7 running (for which
AIUI you have pushed a proposed fix) and also 4.6 (because I think we
surely want to try to make, and test, a fix for the XSA-279 crash
shown above).
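
Any such fix presumably comes down to refusing (or widening to a full
flush) a non-canonical guest-supplied address before it reaches the
INVPCID-based single-address flush.  A hedged sketch of that guard is
below; it is not the actual XSA-279 patch, and the helper names are
local to the example rather than Xen's own.

#include <stdbool.h>
#include <stdint.h>

/* 48-bit sign-extension rule for 4-level paging: bits 63..47 must all
 * equal bit 47.  Local helper for this example only. */
static bool is_canonical_va(uint64_t va)
{
    return ((int64_t)va >> 47) == ((int64_t)va >> 63);
}

/* Hypothetical wrapper around a single-address TLB flush. */
static int flush_one_va(uint64_t va)
{
    if ( !is_canonical_va(va) )
        return -1;   /* refuse, rather than let INVPCID raise #GP */

    /* ... issue INVLPG/INVPCID for 'va' here ... */
    return 0;
}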

Ian.
