
[xen-4.9-testing baseline-only test] 72463: regressions - FAIL

This run is configured for baseline tests only.

flight 72463 xen-4.9-testing real [real]

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl            7 xen-boot                  fail REGR. vs. 72352
 test-amd64-amd64-migrupgrade 10 xen-boot/src_host         fail REGR. vs. 72352
 test-amd64-amd64-migrupgrade 11 xen-boot/dst_host         fail REGR. vs. 72352
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-boot            fail REGR. vs. 72352
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-boot          fail REGR. vs. 72352
 test-armhf-armhf-xl-xsm      19 leak-check/check          fail REGR. vs. 72352
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-localmigrate/x10 fail REGR. vs. 

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win10-i386 17 guest-stop       fail blocked in 72352
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install         fail like 72352
 test-amd64-amd64-qemuu-nested-intel 17 debian-hvm-install/l1/l2 fail like 72352
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-localmigrate/x10 fail like 72352
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-localmigrate/x10  fail like 72352
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail like 72352
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-install        fail never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install         fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install         fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install        fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 10 windows-install        fail never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-xsm      13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-midway   13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-midway   14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 16 guest-localmigrate/x10  fail never pass

version targeted for testing:
 xen                  d6ce860bbdf9dbdc88e4f2692e16776a622b2949
baseline version:
 xen                  61b6df9d821481ba4e26e5843aa9320345077319

Last test of basis    72352  2017-10-26 05:56:41 Z   23 days
Testing same since    72463  2017-11-17 17:46:53 Z    0 days    1 attempts

People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
  David Esler <drumandstrum@xxxxxxxxx>
  Eric Chanudet <chanudete@xxxxxxxxxxxx>
  George Dunlap <george.dunlap@xxxxxxxxxx>
  Jan Beulich <jbeulich@xxxxxxxx>
  Min He <min.he@xxxxxxxxx>
  Yi Zhang <yi.z.zhang@xxxxxxxxx>
  Yu Zhang <yu.c.zhang@xxxxxxxxxxxxxxx>

 build-amd64-xsm                                              pass    
 build-armhf-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 build-amd64-rumprun                                          pass    
 build-i386-rumprun                                           pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm                pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm                pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm                 pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-armhf-armhf-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-armhf-armhf-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-rumprun-amd64                               pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-i386-rumprun-i386                                 pass    
 test-amd64-amd64-xl-qemut-win10-i386                         fail    
 test-amd64-i386-xl-qemut-win10-i386                          fail    
 test-amd64-amd64-xl-qemuu-win10-i386                         fail    
 test-amd64-i386-xl-qemuu-win10-i386                          fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-armhf-armhf-xl-midway                                   pass    
 test-amd64-amd64-migrupgrade                                 fail    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-i386-libvirt-qcow2                                pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    

sg-report-flight on osstest.xs.citrite.net
logs: /home/osstest/logs
images: /home/osstest/images

Logs, config files, etc. are available at

Test harness code can be found at

Push not applicable.

commit d6ce860bbdf9dbdc88e4f2692e16776a622b2949
Author: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Date:   Thu Nov 16 11:47:46 2017 +0100

    x86/shadow: correct SH_LINEAR mapping detection in sh_guess_wrmap()
    The fix for XSA-243 / CVE-2017-15592 (c/s bf2b4eadcf379) introduced a change
    in behaviour for sh_guess_wrmap(), where it had to cope with no shadow
    mapping being present.
    As the name suggests, guest_vtable is a mapping of the guest's pagetable, not
    Xen's pagetable, meaning that it isn't the pagetable we need to check for
    the shadow linear slot in.
    The practical upshot is that a shadow HVM vcpu which switches into 4-level
    paging mode, with an L4 pagetable that contains a mapping which aliases 
    SH_LINEAR_PT_VIRT_START will fool the safety check for whether a 
    mapping is present.  As the check passes (when it should have failed), Xen
    subsequently falls over the missing mapping with a pagefault such as:
        (XEN) Pagetable walk from ffff8140a0503880:
        (XEN)  L4[0x102] = 000000046c218063 ffffffffffffffff
        (XEN)  L3[0x102] = 000000046c218063 ffffffffffffffff
        (XEN)  L2[0x102] = 000000046c218063 ffffffffffffffff
        (XEN)  L1[0x103] = 0000000000000000 ffffffffffffffff
    This is part of XSA-243.
    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Tim Deegan <tim@xxxxxxx>
    master commit: d20daf4294adbdb9316850566013edb98db7bfbc
    master date: 2017-11-16 10:38:14 +0100

commit 2098a2d8fe486952e676f20099590458f731af75
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Nov 16 11:47:13 2017 +0100

    x86: don't wrongly trigger linear page table assertion
    _put_page_type() may do multiple iterations until its cmpxchg()
    succeeds. It invokes set_tlbflush_timestamp() on the first
    iteration, however. Code inside the function takes care of this, but
    - the assertion in _put_final_page_type() would trigger on the second
      iteration if time stamps in a debug build are permitted to be
      sufficiently wider than the default 6 bits (see WRAP_MASK), and
    - its returning -EINTR (for a continuation to be scheduled) would leave
      the page in an inconsistent state (until the re-invocation completes).
    Make the set_tlbflush_timestamp() invocation conditional, bypassing it
    (for now) only in the case we really can't tolerate the stamp to be
    stale.
    This is part of XSA-240.
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: George Dunlap <george.dunlap@xxxxxxxxxx>
    master commit: 2c458dfcb59f3d9d8a35fc5ffbf780b6ed7a26a6
    master date: 2017-11-16 10:37:29 +0100

commit ddfca4005697afd6169153a817bcb527c1520078
Author: Yu Zhang <yu.c.zhang@xxxxxxxxxxxxxxx>
Date:   Thu Nov 16 11:46:38 2017 +0100

    x86/mm: fix race condition in modify_xen_mappings()
    In modify_xen_mappings(), an L1/L2 page table shall be freed
    if all entries of this page table are empty, and the corresponding
    L2/L3 PTE will need to be cleared in such a scenario.
    However, concurrent paging structure modifications on different
    CPUs may cause the L2/L3 PTEs to already be cleared, or to be set
    to reference a superpage.
    Therefore the logic to enumerate the L1/L2 page table and to
    reset the corresponding L2/L3 PTE needs to be protected with a
    spinlock, and the _PAGE_PRESENT and _PAGE_PSE flags need to be
    checked after the lock is obtained.
    Suggested-by: Jan Beulich <jbeulich@xxxxxxxx>
    Signed-off-by: Yu Zhang <yu.c.zhang@xxxxxxxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: b9ee1fd7b98064cf27d0f8f1adf1f5359b72c97f
    master date: 2017-11-14 17:11:26 +0100
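The lock-then-recheck discipline described above can be sketched in ordinary C. This is a minimal stand-alone illustration, not Xen's code: F_PRESENT, F_PSE and try_free_l1() are hypothetical stand-ins for _PAGE_PRESENT, _PAGE_PSE and the real enumeration logic.

```c
#include <pthread.h>
#include <stdint.h>

/* Illustrative flag values and names only -- not Xen's definitions. */
#define F_PRESENT 0x1u
#define F_PSE     0x2u

static pthread_mutex_t map_lock = PTHREAD_MUTEX_INITIALIZER;
static uint32_t l2e = F_PRESENT;          /* stands in for an L2 PTE */

/* Free the L1 table behind an L2 entry only if, *after* taking the
 * lock, the entry is still a present, non-superpage mapping.  Returns
 * 1 on success, 0 if a racing CPU already changed the entry. */
static int try_free_l1(void)
{
    pthread_mutex_lock(&map_lock);
    if (!(l2e & F_PRESENT) || (l2e & F_PSE)) {
        pthread_mutex_unlock(&map_lock);
        return 0;                         /* entry changed under us */
    }
    /* ...safe to enumerate the L1 table and clear the L2 entry... */
    l2e = 0;
    pthread_mutex_unlock(&map_lock);
    return 1;
}
```

The point is simply that the flags are sampled again inside the critical section; any value read before the lock was taken may be stale.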

commit 80eeaab09a1134f34c3c6912a8acb0c43c7c3ef7
Author: Min He <min.he@xxxxxxxxx>
Date:   Thu Nov 16 11:46:07 2017 +0100

    x86/mm: fix race conditions in map_pages_to_xen()
    In map_pages_to_xen(), an L2 page table entry may be reset to point to
    a superpage, and its corresponding L1 page table needs to be freed in
    such a scenario, when these L1 page table entries map consecutive
    page frames and carry the same mapping flags.
    However, the variable `pl1e` is not protected by the lock before the L1
    page table is enumerated. A race condition may happen if this code path
    is invoked simultaneously on different CPUs.
    For example, `pl1e` on CPU0 may hold an obsolete value, pointing
    to a page which has just been freed on CPU1. Besides, before this page
    is reused, it will still be holding the old PTEs, referencing consecutive
    page frames. Consequently `free_xen_pagetable(l2e_to_l1e(ol2e))` will
    be triggered on CPU0, resulting in the unexpected freeing of a normal page.
    This patch fixes the above problem by protecting `pl1e` with the lock.
    Also, there are other potential race conditions. For instance, the L2/L3
    entry may be modified concurrently on different CPUs, by routines such as
    map_pages_to_xen(), modify_xen_mappings(), etc. To fix this, this patch
    checks the _PAGE_PRESENT and _PAGE_PSE flags, after the spinlock is
    obtained, for the corresponding L2/L3 entry.
    Signed-off-by: Min He <min.he@xxxxxxxxx>
    Signed-off-by: Yi Zhang <yi.z.zhang@xxxxxxxxx>
    Signed-off-by: Yu Zhang <yu.c.zhang@xxxxxxxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: a5114662297ad03efc36b52ad365ffa05fb357b7
    master date: 2017-11-14 17:10:56 +0100

commit a0bc38e063a1fbade2f796e862aade6c5a68407a
Author: Eric Chanudet <chanudete@xxxxxxxxxxxx>
Date:   Thu Nov 16 11:45:38 2017 +0100

    x86/hvm: do not register hpet mmio during s3 cycle
    Do it once at domain creation (hpet_init).
    Sleep -> Resume cycles will end up crashing an HVM guest with hpet as
    the sequence during resume takes the path:
    -> hvm_s3_suspend
      -> hpet_reset
        -> hpet_deinit
        -> hpet_init
          -> register_mmio_handler
            -> hvm_next_io_handler
    register_mmio_handler will use a new io handler each time, until
    eventually it reaches NR_IO_HANDLERS, then hvm_next_io_handler calls
    Signed-off-by: Eric Chanudet <chanudete@xxxxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: 015d6738ddff4074668c1d4887bbffd507ed1a7f
    master date: 2017-11-14 17:09:50 +0100
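The essence of the fix is one-time registration. The real patch restructures hpet_init() so that register_mmio_handler() runs only at domain creation; the stand-alone sketch below (hypothetical names and limit, not Xen's code) just shows why repeating the registration on every resume eventually exhausts the handler slots.

```c
#include <stdbool.h>

/* Hypothetical stand-ins -- not Xen's actual code or limits. */
#define MAX_HANDLERS 4

static int nr_handlers;
static bool hpet_registered;

/* Analogue of register_mmio_handler(): each call consumes a handler
 * slot; running out corresponds to the guest crash described above. */
static int register_handler(void)
{
    if (nr_handlers >= MAX_HANDLERS)
        return -1;
    return nr_handlers++;
}

/* Fixed pattern: register the MMIO handler once, at "domain creation"
 * time, so sleep -> resume cycles cannot consume further slots. */
static void hpet_init_once(void)
{
    if (!hpet_registered && register_handler() >= 0)
        hpet_registered = true;
}
```

Calling hpet_init_once() any number of times consumes exactly one slot, whereas calling register_handler() on every resume would hit MAX_HANDLERS after a few cycles.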

commit 2224080ea1a80220c1386cdf2f757fdaaecd8da6
Author: George Dunlap <george.dunlap@xxxxxxxxxx>
Date:   Thu Nov 16 11:44:58 2017 +0100

    x86/mm: Make PV linear pagetables optional
    Allowing pagetables to point to other pagetables of the same level
    (often called 'linear pagetables') has been included in Xen since its
    inception; but recently it has been the source of a number of subtle
    reference-counting bugs.
    It is not used by Linux or MiniOS; but it is used by NetBSD and Novell
    Netware.  There are significant numbers of people who are never going
    to use the feature, along with significant numbers who need the
    feature.
    Add a Kconfig option for the feature (defaulting to 'y').  Also add a
    command-line option to control whether PV linear pagetables are
    allowed (defaulting to 'true').
    NB that we leave linear_pt_count in the page struct.  It's in a union,
    so its presence doesn't increase the size of the data struct.
    Changing the layout of the other elements based on configuration
    options is asking for trouble however; so we'll just leave it there
    and ASSERT that it's zero.
    Reported-by: Jann Horn <jannh@xxxxxxxxxx>
    Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: 3285e75dea89afb0ef5b3ee39bd15194bd7cc110
    master date: 2017-10-27 14:36:45 +0100
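For readers unfamiliar with the mechanism, a Kconfig entry of roughly this shape is what the description implies. The option name below is recalled from the upstream commit and should be treated as an assumption, not authoritative:

```kconfig
config PV_LINEAR_PT
	bool "Support for PV linear pagetables"
	default y
	---help---
	  Allow PV guest pagetables to contain entries pointing to other
	  pagetables of the same level.  Needed by NetBSD and Novell
	  Netware; not used by Linux or MiniOS.  If unsure, say Y.
```

The matching boolean command-line parameter (named `pv-linear-pt` upstream, if memory serves) then lets a build that includes the feature still disable it at boot.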

commit 533b9e4fbaa9e98d978cf8322721dcd222caaef2
Author: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
Date:   Thu Nov 16 11:44:14 2017 +0100

    x86/vpmu: Remove unnecessary call to do_interrupt()
    This call was left during PVHv1 removal (commit 33e5c32559e1 ("x86:
    remove PVHv1 code")):
    -        if ( is_pvh_vcpu(sampling) &&
    -             !(vpmu_mode & XENPMU_MODE_ALL) &&
    +        if ( !(vpmu_mode & XENPMU_MODE_ALL) &&
                  !vpmu->arch_vpmu_ops->do_interrupt(regs) )
    As a result of this extra call, VPMU no longer works for PV guests on Intel
    because we effectively lose the value of MSR_CORE_PERF_GLOBAL_STATUS.
    Signed-off-by: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    master commit: 5e2bfc23f7c9a60c01a02c619e1f3d7456ce0e93
    master date: 2017-10-27 14:32:38 +0100

commit f8732452d2d4d740455f237f3d9fd14ec923279f
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Nov 16 11:43:40 2017 +0100

    x86: fix asm() constraint for GS selector update
    Exception fixup code may alter the operand, which ought to be reflected
    in the constraint.
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    master commit: 65ab53de34851243fb7793ebf12fd92a65f84ddd
    master date: 2017-10-27 13:49:10 +0100

commit 6453a6a3f27b07cb6597b24816353246ff5ec4e8
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Nov 16 11:43:07 2017 +0100

    x86: don't latch wrong (stale) GS base addresses
    load_segments() writes selector registers before doing any of the base
    address updates. Any of these selector loads can cause a page fault in
    case it references the LDT, and the LDT page accessed was only recently
    installed. Therefore the call tree map_ldt_shadow_page() ->
    guest_get_eff_kern_l1e() -> toggle_guest_mode() would in such a case
    wrongly latch the outgoing vCPU's GS.base into the incoming vCPU's
    recorded state.
    Split page table toggling from GS handling - neither
    guest_get_eff_kern_l1e() nor guest_io_okay() need more than the page
    tables being the kernel ones for the memory access they want to do.
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    master commit: a711f6f24a7157ae70d1cc32e61b98f23dc0c584
    master date: 2017-10-27 13:49:10 +0100

commit 1588e534c240e55730d3a0299b53cc723edfa48c
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Nov 16 11:42:16 2017 +0100

    x86: also show FS/GS base addresses when dumping registers
    Their state may be important to figure out the reason for a crash. To
    avoid duplicating code further, break out a helper function.
    I realize that (ab)using the control register array here may not be
    considered the nicest solution, but it seems easier (and less overall
    overhead) to do so compared to the alternative of introducing another
    helper structure.
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    master commit: be7f60b5a39741eab0a8fea0324f7be0cb724cfb
    master date: 2017-10-24 18:13:13 +0200

commit df07ad1315e4d91f758b1e1e9b3cbd393146956f
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Nov 16 11:41:44 2017 +0100

    x86: fix GS-base-dirty determination
    load_segments() writes the two MSRs in their "canonical" positions
    (GS_BASE for the user base, SHADOW_GS_BASE for the kernel one) and uses
    SWAPGS to switch them around if the incoming vCPU is in kernel mode. In
    order to not leave a stale kernel address in GS_BASE when the incoming
    guest is in user mode, the check on the outgoing vCPU needs to be
    dependent upon the mode it is currently in, rather than blindly looking
    at the user base.
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    master commit: 91f85280b9b80852352fcad73d94ed29fafb88da
    master date: 2017-10-24 18:12:31 +0200

commit 71648cba2671b108555d11d3810fbc71107e202c
Author: David Esler <drumandstrum@xxxxxxxxx>
Date:   Thu Nov 16 11:40:37 2017 +0100

    x86/boot: fix early error output
    In 9180f5365524 a change was made to the send_chr function to take in
    C-strings and output a character at a time until a NULL was encountered.
    However, when there is no VGA there is no code to increment the current
    character position resulting in an endless loop of the first character.
    This moves the (implicit) increment such that it occurs in all cases.
    Signed-off-by: David Esler <drumandstrum@xxxxxxxxx>
    Reviewed-by: Doug Goldstein <cardoe@xxxxxxxxxx>
    [jb: correct title and description]
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Daniel Kiper <daniel.kiper@xxxxxxxxxx>
    master commit: 78e693cc123296db2f79e792cf474544c1ffd064
    master date: 2017-10-20 09:29:29 +0200
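The fix can be illustrated with a minimal stand-alone loop. send_str() below is a hypothetical stand-in for Xen's send_chr, with the actual character output elided; the only point is where the pointer increment sits.

```c
#include <stddef.h>

/* Walk a C-string a character at a time and advance the pointer in
 * ALL cases.  The pre-fix code only advanced on the VGA path, so a
 * boot without VGA spun forever emitting the first character. */
static size_t send_str(const char *s)
{
    size_t n = 0;
    while (*s) {
        /* character output (serial and/or VGA) would happen here */
        s++;                /* increment unconditionally -- the fix */
        n++;
    }
    return n;
}
```

With the increment outside any output-path conditional, the loop terminates at the NUL for every configuration.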
(qemu changes not included)

osstest-output mailing list


