[xen-4.5-testing test] 93905: regressions - FAIL

flight 93905 xen-4.5-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/93905/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   5 xen-build                 fail REGR. vs. 92345
 build-amd64-pvops             5 kernel-build              fail REGR. vs. 92345
 build-amd64-prev              5 xen-build                 fail REGR. vs. 92345

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     11 guest-start                  fail   like 92182

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-amd64-rumpuserxen-amd64  1 build-check(1)               blocked n/a
 test-amd64-i386-rumpuserxen-i386  1 build-check(1)               blocked  n/a
 build-amd64-rumpuserxen       1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-migrupgrade  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-xl-qemut-winxpsp3  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 build-check(1)               blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-pvh-amd   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 build-check(1)               blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvh-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-armhf-armhf-xl-credit2  13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      10 guest-start                  fail   never pass
 test-armhf-armhf-libvirt     14 guest-saverestore            fail   never pass
 test-armhf-armhf-libvirt     12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 10 guest-start                  fail never pass
 test-armhf-armhf-libvirt-raw 10 guest-start                  fail   never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-check    fail never pass

version targeted for testing:
 xen                  065b1347b0902fd68291ddc593a0055259793383
baseline version:
 xen                  c70ab649fcde2f0c3d750d35f5e2b77d619ba80b

Last test of basis    92345  2016-04-22 10:56:14 Z   17 days
Testing same since    93905  2016-05-09 11:39:53 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@xxxxxxxxxx>
  Jan Beulich <jbeulich@xxxxxxxx>
  Mike Meyer <mike.meyer@xxxxxxxxxxxx>
  Olaf Hering <olaf@xxxxxxxxx>
  Stefano Stabellini <sstabellini@xxxxxxxxxx>
  Tim Deegan <tim@xxxxxxx>
  Wei Liu <wei.liu2@xxxxxxxxxx>

jobs:
 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             fail    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 build-amd64-rumpuserxen                                      blocked 
 build-i386-rumpuserxen                                       pass    
 test-amd64-amd64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvh-amd                                  blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-rumpuserxen-amd64                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-i386-rumpuserxen-i386                             blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvh-intel                                blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-migrupgrade                                 blocked 
 test-amd64-i386-migrupgrade                                  blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-i386-xl-qemut-winxpsp3                            blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xl-qemuu-winxpsp3                            blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 065b1347b0902fd68291ddc593a0055259793383
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Mon May 9 13:16:10 2016 +0200

    x86/shadow: account for ioreq server pages before complaining about not found mapping
    
    prepare_ring_for_helper(), just like share_xen_page_with_guest(),
    takes a write reference on the page, and hence should similarly be
    accounted for when determining whether to log a complaint.
    
    This requires using recursive locking for the ioreq server lock, as the
    offending invocation of sh_remove_all_mappings() is down the call stack
    from hvm_set_ioreq_server_state(). (While not strictly needed to be
    done in all other instances too, convert all of them for consistency.)
    
    At once improve the usefulness of the shadow error message: Log all
    values involved in triggering it as well as the GFN (to aid
    understanding which guest page it is that there is a problem with - in
    cases like the one here the GFN is invariant across invocations, while
    the MFN obviously can [and will] vary).
    
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Acked-by: Tim Deegan <tim@xxxxxxx>
    master commit: 77eb5dbeff78bbe549793325520f59ab46a187f8
    master date: 2016-05-02 09:20:17 +0200
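
    As a minimal standalone sketch of the recursive-locking pattern the
    conversion above relies on (toy names, not Xen's actual lock
    primitives): a recursive lock records its owner and a nesting depth,
    so a re-acquisition from deeper in the same call stack nests instead
    of deadlocking.

      #include <stdio.h>

      /* Toy recursive lock: owner CPU plus nesting depth. */
      typedef struct { int owner; unsigned int depth; } rec_lock_t;
      #define NOBODY (-1)

      static rec_lock_t ioreq_lock = { NOBODY, 0 };

      static void rec_lock(rec_lock_t *l, int cpu)
      {
          if (l->owner != cpu) {
              while (l->owner != NOBODY)
                  ;                  /* would spin on a real SMP system */
              l->owner = cpu;
          }
          l->depth++;                /* owner re-acquiring just nests */
      }

      static void rec_unlock(rec_lock_t *l)
      {
          if (--l->depth == 0)
              l->owner = NOBODY;
      }

      /* Stands in for sh_remove_all_mappings(), reached with the lock
       * already held further up the call stack. */
      static void remove_mappings(int cpu)
      {
          rec_lock(&ioreq_lock, cpu);    /* depth becomes 2, no deadlock */
          printf("nested acquire ok, depth=%u\n", ioreq_lock.depth);
          rec_unlock(&ioreq_lock);
      }

      int main(void)
      {
          rec_lock(&ioreq_lock, 0);      /* outer acquire */
          remove_mappings(0);
          rec_unlock(&ioreq_lock);
          return 0;
      }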

commit f9cc40e87768ccb80bd10a106e300848ac532067
Author: Jan Beulich <JBeulich@xxxxxxxx>
Date:   Mon May 9 13:15:14 2016 +0200

    x86/time: fix gtime_to_gtsc for vtsc=1 PV guests
    
    For vtsc=1 PV guests, rdtsc is trapped and calculated from get_s_time()
    using gtime_to_gtsc. Similarly the tsc_timestamp, part of struct
    vcpu_time_info, is calculated from stime_local_stamp using
    gtime_to_gtsc.
    
    However, gtime_to_gtsc can return 0 if time < vtsc_offset, which can
    actually happen when gtime_to_gtsc is called with stime_local_stamp
    (the caller being __update_vcpu_system_time).
    
    In that case the pvclock protocol doesn't work properly and the guest is
    unable to calculate the system time correctly. As a consequence when the
    guest tries to set a timer event (for example calling the
    VCPUOP_set_singleshot_timer hypercall), the event will be in the past
    causing Linux to hang.
    
    The purpose of the pvclock protocol is to allow the guest to calculate
    the system_time in nanoseconds correctly. The guest calculates as follows:
    
      from_vtsc_scale(rdtsc - vcpu_time_info.tsc_timestamp) + vcpu_time_info.system_time
    
    Given that with vtsc=1:
      rdtsc = to_vtsc_scale(NOW() - vtsc_offset)
      vcpu_time_info.tsc_timestamp = to_vtsc_scale(vcpu_time_info.system_time - vtsc_offset)
    
    The expression evaluates to NOW(), which is what we want.  However when
    stime_local_stamp < vtsc_offset, vcpu_time_info.tsc_timestamp is
    actually 0. As a consequence the calculated overall system_time is not
    correct.
    
    This patch fixes the issue by letting gtime_to_gtsc return a negative
    integer in the form of a wrapped-around unsigned integer, so that when
    the guest subtracts vcpu_time_info.tsc_timestamp from rdtsc it will
    calculate the right value.
    
    Signed-off-by: Jan Beulich <JBeulich@xxxxxxxx>
    Signed-off-by: Stefano Stabellini <sstabellini@xxxxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    master commit: d22c9bf7c3067b17cbd9cdfd8b81941dd6fb8d77
    master date: 2016-04-28 15:06:56 +0200
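
    A minimal standalone sketch of the wraparound arithmetic this fix
    relies on (numbers illustrative, scaling treated as the identity):
    even when tsc_timestamp is computed as a wrapped-around "negative"
    value, the guest's unsigned subtraction cancels the wrap and
    recovers NOW().

      #include <stdint.h>
      #include <stdio.h>

      int main(void)
      {
          uint64_t vtsc_offset = 1000;  /* guest's vtsc epoch */
          uint64_t system_time = 600;   /* stime_local_stamp < vtsc_offset */
          uint64_t now = 1500;          /* NOW() when the guest does rdtsc */

          uint64_t rdtsc = now - vtsc_offset;              /* 500 */
          uint64_t tsc_stamp = system_time - vtsc_offset;  /* wraps: "-400" */

          /* Guest-side pvclock computation: the unsigned subtraction
           * cancels the wraparound, so the result is NOW() again. */
          uint64_t guest_now = (rdtsc - tsc_stamp) + system_time;

          printf("guest_now=%llu expected=%llu\n",
                 (unsigned long long)guest_now, (unsigned long long)now);
          return 0;
      }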

commit becb125a25859f42ec40a156ca1aa138f5d6fdd7
Author: Mike Meyer <mike.meyer@xxxxxxxxxxxx>
Date:   Mon Apr 4 15:02:59 2016 +0200

    unmodified_drivers: enable use of register_oldmem_pfn_is_ram() API
    
    Git: a0f793d82d5ec2d0b67c57d7130bf01c91396c60
    
    During the investigation of very slow dump times of guest images in
    Amazon EC2 instance, it was discovered that the
    register_oldmem_pfn_is_ram() API implemented by the upstream kernel
    commit 997c136f518c5debd63847e78e2a8694f56dcf90:
    
            fs/proc/vmcore.c: add hook to read_from_oldmem() to check for non-ram pages
    
    was not being called.  This was because the PV driver making the call
    to the register_oldmem_pfn_is_ram() API was not including the kernel
    header file that is used to communicate support for the API in the
    kernel.  Fix the issue by including the required header file.
    
    Signed-off-by: Mike Meyer <mike.meyer@xxxxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    Acked-by: Olaf Hering <olaf@xxxxxxxxx>
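
    The failure mode generalises: an API registration guarded by a macro
    that a header advertises compiles away silently when the #include is
    missing.  A tiny self-contained illustration (the macro and hook names
    here are hypothetical, not the real kernel interface):

      #include <stdio.h>

      /* Stand-in for the macro a kernel header would provide; delete
       * this line to see the hook silently disappear, as in the bug. */
      #define HAVE_OLDMEM_PFN_IS_RAM 1

      static void register_hook(void)
      {
          printf("oldmem_pfn_is_ram hook registered\n");
      }

      int main(void)
      {
      #ifdef HAVE_OLDMEM_PFN_IS_RAM
          register_hook();                     /* built only if visible */
      #else
          printf("hook silently omitted\n");   /* the observed failure */
      #endif
          return 0;
      }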

commit 0aabc2855c9e10b61ceb75a6b25ecc6d467e99e5
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Mon May 9 13:05:42 2016 +0200

    x86/HVM: fix forwarding of internally cached requests
    
    Forwarding entire batches to the device model when an individual
    iteration of them got rejected by internal device emulation handlers
    with X86EMUL_UNHANDLEABLE is wrong: The device model would then handle
    all iterations, without the internal handler getting to see any past
    the one it returned failure for. This causes misbehavior in at least
    the MSI-X and VGA code, which want to see all such requests for
    internal tracking/caching purposes. But note that this does not apply
    to buffered I/O requests.
    
    This in turn means that the condition in hvm_process_io_intercept() of
    when to crash the domain was wrong: Since X86EMUL_UNHANDLEABLE can
    validly be returned by the individual device handlers, we mustn't
    blindly crash the domain if such occurs on other than the initial
    iteration. Instead we need to distinguish hvm_copy_*_guest_phys()
    failures from device specific ones, and then the former need to always
    be fatal to the domain (i.e. also on the first iteration), since
    otherwise we again would end up forwarding a request to qemu which the
    internal handler didn't get to see.
    
    The adjustment should be okay even for stdvga's MMIO handling:
    - if it is not caching then the accept function would have failed so we
      won't get into hvm_process_io_intercept(),
    - if it issued the buffered ioreq then we only get to the p->count
      reduction if hvm_send_ioreq() actually encountered an error (in which
      case we don't care about the request getting split up).
    
    Also commit 4faffc41d ("x86/hvm: limit reps to avoid the need to handle
    retry") went too far in removing code from hvm_process_io_intercept():
    When there were successfully handled iterations, the function should
    continue to return success with a clipped repeat count.
    
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
    
    x86/HVM: fix forwarding of internally cached requests (part 2)
    
    Commit 96ae556569 ("x86/HVM: fix forwarding of internally cached
    requests") wasn't quite complete: hvmemul_do_io() also needs to
    propagate up the clipped count. (I really should have re-tested the
    forward port resulting in the earlier change, instead of relying on the
    testing done on the older version of Xen which the fix was first needed
    for.)
    
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    master commit: 96ae556569b8eaedc0bb242932842c3277b515d8
    master date: 2016-03-31 14:52:04 +0200
    master commit: 670ee15ac1e3de7c15381fdaab0e531489b48939
    master date: 2016-04-28 15:09:26 +0200
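
    A minimal sketch of the batching rule the two fixes converge on
    (enum and function names illustrative): copy failures are always
    fatal, a handler failure on the first iteration propagates as is,
    and a handler failure after N successful iterations clips the repeat
    count to N and reports success so the completed work is not
    re-forwarded to the device model.

      #include <stdio.h>

      enum rc { RC_OK, RC_UNHANDLEABLE, RC_COPY_FAIL };

      /* One iteration of a rep access; fails on the 4th rep here. */
      static enum rc handle_one(unsigned int i)
      {
          return (i < 3) ? RC_OK : RC_UNHANDLEABLE;
      }

      static enum rc process_intercept(unsigned int *reps)
      {
          for (unsigned int i = 0; i < *reps; i++) {
              enum rc r = handle_one(i);
              if (r == RC_COPY_FAIL)
                  return r;      /* guest-copy failure: always fatal */
              if (r != RC_OK) {
                  if (i == 0)
                      return r;  /* nothing handled: report the failure */
                  *reps = i;     /* clip to the completed iterations... */
                  return RC_OK;  /* ...and report success for them */
              }
          }
          return RC_OK;
      }

      int main(void)
      {
          unsigned int reps = 8;
          enum rc r = process_intercept(&reps);
          printf("rc=%d reps=%u\n", r, reps);   /* rc=0 reps=3 */
          return 0;
      }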

commit 12acca51313d3582968287e2dd3d7498cb02a7ce
Author: David Vrabel <david.vrabel@xxxxxxxxxx>
Date:   Mon May 9 13:05:13 2016 +0200

    x86/fpu: improve check for XSAVE* not writing FIP/FDP fields
    
    The hardware may not write the FIP/FDP fields with an XSAVE*
    instruction: e.g., with XSAVEOPT/XSAVES if the state hasn't changed,
    or on AMD CPUs when a floating point exception is not pending.  We
    need to identify this case so we can correctly apply the check for
    whether to save/restore FCS/FDS.
    
    By poisoning FIP in the saved state we can check if the hardware
    writes to this field.  The poison value is both: a) non-canonical; and
    b) random with a vanishingly small probability of matching a value
    written by the hardware (1 / 2^63 ≈ 10^-19).
    
    The poison value is fixed and thus knowable by a guest (or guest
    userspace).  This could allow the guest to cause Xen to incorrectly
    detect that the field has not been written.  But: a) this requires the
    FIP register to be a full 64 bits internally which is not the case for
    all current AMD and Intel CPUs; and b) this only allows the guest (or
    a guest userspace process) to corrupt its own state (i.e., it cannot
    affect the state of another guest or another user space process).
    
    This results in smaller code with fewer branches and is more
    understandable.
    
    Signed-off-by: David Vrabel <david.vrabel@xxxxxxxxxx>
    
    Intel confirmed that 64-bit {F,}XRSTOR sign-extend FIP from bit 47.
    While leaving the description above intact, modify the code comment
    accordingly.
    
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: e869abd77aa32fb0a5212d34ae954e4dbcb8f7a5
    master date: 2016-03-18 09:49:01 +0100
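
    A standalone illustration of the poison-and-check idea (the poison
    value and save-area layout are made up here, not the ones Xen uses):
    seed the saved state with a non-canonical value before saving; if it
    survives, the hardware did not write FIP.

      #include <stdint.h>
      #include <stdio.h>

      /* Made-up poison: bits 63:48 are neither all-zero nor all-one,
       * so the value is non-canonical and no real RIP can equal it. */
      static const uint64_t fip_poison = 0xAAAA5555AAAA5555ull;

      struct save_area { uint64_t fip; };   /* illustrative layout */

      /* Stand-in for XSAVE*: hardware may or may not update fip. */
      static void fake_xsave(struct save_area *s, int hw_writes,
                             uint64_t rip)
      {
          if (hw_writes)
              s->fip = rip;
      }

      int main(void)
      {
          struct save_area st;

          st.fip = fip_poison;            /* poison before saving */
          fake_xsave(&st, 0, 0x401000);   /* hardware skipped FIP */

          if (st.fip == fip_poison)
              printf("FIP untouched: no FCS/FDS fixup needed\n");
          else
              printf("FIP written: %#llx\n",
                     (unsigned long long)st.fip);
          return 0;
      }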

commit 9945f6230f83a3869526bec6e5eb865098e79c05
Author: David Vrabel <david.vrabel@xxxxxxxxxx>
Date:   Mon May 9 13:04:26 2016 +0200

    x86/hvm: add HVM_PARAM_X87_FIP_WIDTH
    
    Add the HVM parameter HVM_PARAM_X87_FIP_WIDTH to allow tools and the
    guest to adjust the width of the FIP/FDP registers to be saved/restored
    by the hypervisor.  This is in case the hypervisor heuristics do not do
    the right thing.
    
    Add this parameter to the set saved during domain save/migrate.
    
    Signed-off-by: David Vrabel <david.vrabel@xxxxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>
    master commit: 5d768fb1f3f7b011e7b6e75909c7f4841730de60
    master date: 2016-02-26 12:30:11 +0100

commit 38eee32060212bf6fe363e334e340465ec0a870a
Author: David Vrabel <david.vrabel@xxxxxxxxxx>
Date:   Mon May 9 13:03:15 2016 +0200

    x86/fpu: add a per-domain field to set the width of FIP/FDP
    
    The x86 architecture allows either: a) the 64-bit FIP/FDP registers to
    be restored (clearing FCS and FDS); or b) the 32-bit FIP/FDP and
    FCS/FDS registers to be restored (clearing the upper 32-bits).
    
    Add a per-domain field to indicate which of these options a guest
    needs.  The options are 8, 4 or 0, where 0 indicates that the
    hypervisor should automatically guess the FIP width by checking the
    value of FIP/FDP when saving the state (this is the existing
    behaviour).
    
    The FIP width is initially automatic but is set explicitly in the
    following cases:
    
    - 32-bit PV guest: 4
    - Newer CPUs that do not save FCS/FDS: 8
    
    The x87_fip_width field is placed into an existing 1 byte hole in
    struct arch_domain.
    
    Signed-off-by: David Vrabel <david.vrabel@xxxxxxxxxx>
    
    Fix build.
    
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: 879b44b041f26de35e4b527bf0f3c361eb52bd82
    master date: 2016-02-26 12:29:21 +0100
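
    A sketch of the 8/4/0 convention as described (toy types, not Xen's
    struct arch_domain, and the guessing heuristic below is a simplified
    assumption): a nonzero width is used as configured, while 0 falls
    back to guessing from the saved FIP value.

      #include <stdint.h>
      #include <stdio.h>

      struct toy_domain { uint8_t x87_fip_width; };   /* 8, 4 or 0 */

      static unsigned int effective_fip_width(const struct toy_domain *d,
                                              uint64_t saved_fip)
      {
          if (d->x87_fip_width)
              return d->x87_fip_width;   /* explicitly configured */
          /* 0: guess at save time -- a nonzero upper half implies the
           * hardware wrote a 64-bit FIP. */
          return (saved_fip >> 32) ? 8 : 4;
      }

      int main(void)
      {
          struct toy_domain pv32 = { 4 };     /* 32-bit PV guest */
          struct toy_domain auto_d = { 0 };   /* existing behaviour */

          printf("%u\n", effective_fip_width(&pv32, 0));            /* 4 */
          printf("%u\n", effective_fip_width(&auto_d, 1ull << 40)); /* 8 */
          printf("%u\n", effective_fip_width(&auto_d, 0x1234));     /* 4 */
          return 0;
      }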
(qemu changes not included)
