
[xen-4.7-testing test] 116240: regressions - FAIL



flight 116240 xen-4.7-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116240/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-xsm               6 xen-build      fail in 116219 REGR. vs. 115210

Tests which are failing intermittently (not blocking):
 test-xtf-amd64-amd64-1 49 xtf/test-hvm64-lbr-tsx-vmentry fail in 116219 pass in 116240
 test-amd64-i386-qemuu-rhel6hvm-intel 12 guest-start/redhat.repeat fail in 116219 pass in 116240
 test-armhf-armhf-xl-rtds     12 guest-start      fail in 116219 pass in 116240
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-localmigrate/x10 fail pass in 116219
 test-amd64-amd64-xl-qemut-ws16-amd64 15 guest-saverestore.2 fail pass in 116219

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop        fail REGR. vs. 115210
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop        fail REGR. vs. 115210

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-xsm  1 build-check(1)           blocked in 116219 n/a
 test-armhf-armhf-xl-xsm       1 build-check(1)           blocked in 116219 n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop  fail in 116219 like 115210
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop  fail in 116219 like 115210
 test-xtf-amd64-amd64-5      49 xtf/test-hvm64-lbr-tsx-vmentry fail like 115189
 test-xtf-amd64-amd64-4      49 xtf/test-hvm64-lbr-tsx-vmentry fail like 115189
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 115210
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 115210
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop             fail like 115210
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 115210
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-check    fail  like 115210
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 115210
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 115210
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-xsm      13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install         fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-install        fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-install        fail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install         fail never pass

version targeted for testing:
 xen                  259a5c3000d840f244dbb30f2b47b95f2dc0f80f
baseline version:
 xen                  830224431b67fd2afad9bdc532dc1bede20032d5

Last test of basis   115210  2017-10-25 09:01:33 Z   23 days
Testing same since   116219  2017-11-16 11:17:46 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  Eric Chanudet <chanudete@xxxxxxxxxxxx>
  George Dunlap <george.dunlap@xxxxxxxxxx>
  Jan Beulich <jbeulich@xxxxxxxx>
  Min He <min.he@xxxxxxxxx>
  Yi Zhang <yi.z.zhang@xxxxxxxxx>
  Yu Zhang <yu.c.zhang@xxxxxxxxxxxxxxx>

jobs:
 build-amd64-xsm                                              pass    
 build-armhf-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 build-amd64-rumprun                                          pass    
 build-i386-rumprun                                           pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm                pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm                pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm                 pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-armhf-armhf-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-armhf-armhf-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-rumprun-amd64                               pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-i386-rumprun-i386                                 pass    
 test-amd64-amd64-xl-qemut-win10-i386                         fail    
 test-amd64-i386-xl-qemut-win10-i386                          fail    
 test-amd64-amd64-xl-qemuu-win10-i386                         fail    
 test-amd64-i386-xl-qemuu-win10-i386                          fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-i386-libvirt-qcow2                                pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 259a5c3000d840f244dbb30f2b47b95f2dc0f80f
Author: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Date:   Thu Nov 16 12:03:26 2017 +0100

    x86/shadow: correct SH_LINEAR mapping detection in sh_guess_wrmap()
    
    The fix for XSA-243 / CVE-2017-15592 (c/s bf2b4eadcf379) introduced a
    change in behaviour for sh_guess_wrmap(), where it had to cope with no
    shadow linear mapping being present.

    As the name suggests, guest_vtable is a mapping of the guest's
    pagetable, not Xen's pagetable, meaning that it isn't the pagetable we
    need to check for the shadow linear slot in.

    The practical upshot is that a shadow HVM vcpu which switches into
    4-level paging mode, with an L4 pagetable that contains a mapping which
    aliases Xen's SH_LINEAR_PT_VIRT_START, will fool the safety check for
    whether a SHADOW_LINEAR mapping is present.  As the check passes (when
    it should have failed), Xen subsequently falls over the missing mapping
    with a pagefault such as:
    
        (XEN) Pagetable walk from ffff8140a0503880:
        (XEN)  L4[0x102] = 000000046c218063 ffffffffffffffff
        (XEN)  L3[0x102] = 000000046c218063 ffffffffffffffff
        (XEN)  L2[0x102] = 000000046c218063 ffffffffffffffff
        (XEN)  L1[0x103] = 0000000000000000 ffffffffffffffff
    
    This is part of XSA-243.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Tim Deegan <tim@xxxxxxx>
    master commit: d20daf4294adbdb9316850566013edb98db7bfbc
    master date: 2017-11-16 10:38:14 +0100
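
For illustration only, a minimal self-contained sketch of the class of bug described above (this is not Xen code; the slot index, table layout and helper name are invented for the example): a presence check applied to the guest's own L4 can pass even though the L4 Xen is actually running on has no shadow-linear mapping installed.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PRESENT         0x1ULL
    #define SH_LINEAR_SLOT  0x102        /* hypothetical slot index */

    /* Hypothetical stand-ins for "the guest's L4" and "the L4 Xen runs on". */
    static uint64_t guest_l4[512];
    static uint64_t active_l4[512];

    /* The safety check only tells the truth when applied to the active table. */
    static bool sh_linear_mapping_present(const uint64_t *l4)
    {
        return (l4[SH_LINEAR_SLOT] & PRESENT) != 0;
    }

    int main(void)
    {
        /* The guest aliases something present at the shadow-linear slot... */
        guest_l4[SH_LINEAR_SLOT] = 0x46c218000ULL | PRESENT;
        /* ...but the active L4 has no shadow-linear mapping installed. */
        active_l4[SH_LINEAR_SLOT] = 0;

        printf("check against guest L4 : %d (wrongly passes)\n",
               sh_linear_mapping_present(guest_l4));
        printf("check against active L4: %d (correctly fails)\n",
               sh_linear_mapping_present(active_l4));
        return 0;
    }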

commit 1f551847f58c1a029dd17a9aca0b08908a7a445b
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Nov 16 12:02:55 2017 +0100

    x86: don't wrongly trigger linear page table assertion
    
    _put_page_type() may do multiple iterations until its cmpxchg()
    succeeds. It invokes set_tlbflush_timestamp() on the first
    iteration, however. Code inside the function takes care of this, but
    - the assertion in _put_final_page_type() would trigger on the second
      iteration if time stamps in a debug build are permitted to be
      sufficiently much wider than the default 6 bits (see WRAP_MASK in
      flushtlb.c),
    - it returning -EINTR (for a continuation to be scheduled) would leave
      the page in an inconsistent state (until the re-invocation completes).
    Make the set_tlbflush_timestamp() invocation conditional, bypassing it
    (for now) only in the case we really can't tolerate the stamp to be
    stored.
    
    This is part of XSA-240.
    
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: George Dunlap <george.dunlap@xxxxxxxxxx>
    master commit: 2c458dfcb59f3d9d8a35fc5ffbf780b6ed7a26a6
    master date: 2017-11-16 10:37:29 +0100
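
For illustration only, a rough sketch of the retry-loop shape involved (all names and the structure are invented; this is not Xen's _put_page_type() logic): the side effect inside a cmpxchg retry loop is made conditional instead of being applied unconditionally.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    struct page {                        /* hypothetical, not struct page_info */
        _Atomic uint32_t type_info;
        uint32_t tlbflush_timestamp;
    };

    static uint32_t tlbflush_clock = 42;         /* stand-in for the real clock */

    static void set_tlbflush_timestamp(struct page *pg)
    {
        pg->tlbflush_timestamp = tlbflush_clock;
    }

    /* Drop one type reference; the timestamp is stored only when the caller
     * indicates it can be tolerated, not on every trip around the loop. */
    static void put_page_type(struct page *pg, bool may_stamp)
    {
        uint32_t old = atomic_load(&pg->type_info);

        for ( ;; )
        {
            uint32_t new = old - 1;

            if ( may_stamp && new == 0 )
                set_tlbflush_timestamp(pg);

            if ( atomic_compare_exchange_weak(&pg->type_info, &old, new) )
                break;                   /* 'old' was refreshed on failure */
        }
    }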

commit 721c5b3082a1c5c62038401a45b8388bd069e312
Author: Yu Zhang <yu.c.zhang@xxxxxxxxxxxxxxx>
Date:   Thu Nov 16 12:02:24 2017 +0100

    x86/mm: fix race condition in modify_xen_mappings()
    
    In modify_xen_mappings(), an L1/L2 page table shall be freed if all
    entries of this page table are empty, and the corresponding L2/L3 PTE
    will need to be cleared in such a scenario.
    
    However, concurrent paging structure modifications on different CPUs
    may cause the L2/L3 PTEs to already be cleared, or to be set to
    reference a superpage.
    
    Therefore the logic to enumerate the L1/L2 page table and to reset the
    corresponding L2/L3 PTE needs to be protected with a spinlock, and the
    _PAGE_PRESENT and _PAGE_PSE flags need to be checked after the lock is
    obtained.
    
    Suggested-by: Jan Beulich <jbeulich@xxxxxxxx>
    Signed-off-by: Yu Zhang <yu.c.zhang@xxxxxxxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: b9ee1fd7b98064cf27d0f8f1adf1f5359b72c97f
    master date: 2017-11-14 17:11:26 +0100
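
The lock-then-recheck pattern the commit describes can be sketched generically as below (a userspace mock-up with invented names, using a pthread mutex in place of Xen's spinlock; it is not the actual modify_xen_mappings() code):

    #include <pthread.h>
    #include <stdint.h>

    #define PAGE_PRESENT  0x001ULL
    #define PAGE_PSE      0x080ULL

    static uint64_t l3_entry;                        /* hypothetical L3 slot */
    static pthread_mutex_t map_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Clear the L3 entry only if it still points at a (now empty) L2 table. */
    static void maybe_free_l2_table(void)
    {
        pthread_mutex_lock(&map_lock);

        /* Re-check under the lock: another CPU may have cleared the entry,
         * or turned it into a superpage, since it was last examined. */
        if ( (l3_entry & PAGE_PRESENT) && !(l3_entry & PAGE_PSE) )
        {
            l3_entry = 0;
            /* ... free the L2 page table the old entry referenced ... */
        }

        pthread_mutex_unlock(&map_lock);
    }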

commit 33479cdf3005c92b001be424fc51123a6fb96885
Author: Min He <min.he@xxxxxxxxx>
Date:   Thu Nov 16 12:01:57 2017 +0100

    x86/mm: fix race conditions in map_pages_to_xen()
    
    In map_pages_to_xen(), an L2 page table entry may be reset to point to
    a superpage, and its corresponding L1 page table needs to be freed in
    such a scenario, when these L1 page table entries map consecutive page
    frames and have the same mapping flags.
    
    However, the variable `pl1e` is not protected by the lock before the
    L1 page table is enumerated. A race condition may occur if this code
    path is invoked simultaneously on different CPUs.
    
    For example, `pl1e` on CPU0 may hold an obsolete value, pointing to a
    page which has just been freed on CPU1. Moreover, before this page is
    reused, it will still hold the old PTEs, referencing consecutive page
    frames. Consequently, `free_xen_pagetable(l2e_to_l1e(ol2e))` will be
    triggered on CPU0, resulting in the unexpected freeing of a normal page.
    
    This patch fixes the above problem by protecting the `pl1e` with the lock.
    
    Also, there are other potential race conditions. For instance, the
    L2/L3 entry may be modified concurrently on different CPUs by routines
    such as map_pages_to_xen(), modify_xen_mappings(), etc. To fix this,
    this patch checks the _PAGE_PRESENT and _PAGE_PSE flags of the
    corresponding L2/L3 entry after the spinlock is obtained.
    
    Signed-off-by: Min He <min.he@xxxxxxxxx>
    Signed-off-by: Yi Zhang <yi.z.zhang@xxxxxxxxx>
    Signed-off-by: Yu Zhang <yu.c.zhang@xxxxxxxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: a5114662297ad03efc36b52ad365ffa05fb357b7
    master date: 2017-11-14 17:10:56 +0100
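
Similarly, the pl1e hazard can be pictured with the following mock-up (again with invented names and a pthread mutex standing in for Xen's lock): the pointer derived from the L2 entry is only trustworthy if it is derived, and the entry's flags re-checked, after the lock is taken.

    #include <pthread.h>
    #include <stdint.h>

    #define PAGE_PRESENT  0x001ULL
    #define PAGE_PSE      0x080ULL

    static uint64_t l2_entry;                        /* hypothetical L2 slot */
    static pthread_mutex_t map_lock = PTHREAD_MUTEX_INITIALIZER;

    static uint64_t *l2e_to_l1_table(uint64_t e)     /* illustrative only */
    {
        return (uint64_t *)(uintptr_t)(e & ~0xfffULL);
    }

    static void collapse_to_superpage(void)
    {
        pthread_mutex_lock(&map_lock);

        /* Derive pl1e only while the lock is held, and re-check the flags:
         * the entry may already be a superpage, or may have been cleared,
         * in which case the old L1 table could have been freed elsewhere. */
        if ( (l2_entry & PAGE_PRESENT) && !(l2_entry & PAGE_PSE) )
        {
            uint64_t *pl1e = l2e_to_l1_table(l2_entry);
            (void)pl1e;  /* ... scan the L1 entries, then free the table ... */
        }

        pthread_mutex_unlock(&map_lock);
    }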

commit a8d5690cc3d9b6f16ca4ff608c4d39adf39dd64e
Author: Eric Chanudet <chanudete@xxxxxxxxxxxx>
Date:   Thu Nov 16 12:01:28 2017 +0100

    x86/hvm: do not register hpet mmio during s3 cycle
    
    Do it once at domain creation (hpet_init).
    
    Sleep -> Resume cycles will end up crashing an HVM guest with hpet, as
    the sequence during resume takes the path:
    -> hvm_s3_suspend
      -> hpet_reset
        -> hpet_deinit
        -> hpet_init
          -> register_mmio_handler
            -> hvm_next_io_handler
    
    register_mmio_handler will consume a new IO handler slot each time,
    until it eventually reaches NR_IO_HANDLERS, at which point
    hvm_next_io_handler calls domain_crash.
    
    Signed-off-by: Eric Chanudet <chanudete@xxxxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: 015d6738ddff4074668c1d4887bbffd507ed1a7f
    master date: 2017-11-14 17:09:50 +0100
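
One way to express the "register only once" idea, purely as an illustration (the function names below are stand-ins, not the Xen hvm/hpet API):

    #include <stdbool.h>

    static bool hpet_mmio_registered;

    static void register_hpet_mmio_handler(void)
    {
        /* In Xen this consumes one slot of a fixed pool (NR_IO_HANDLERS);
         * repeating it on every S3 resume eventually exhausts the pool. */
    }

    /* Called at domain creation and again on resume; only the first call
     * registers the MMIO handler, later calls merely reset hpet state. */
    static void hpet_init(void)
    {
        if ( !hpet_mmio_registered )
        {
            register_hpet_mmio_handler();
            hpet_mmio_registered = true;
        }
        /* ... (re)initialise counters and comparators ... */
    }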

commit 227cbb7bfcea21b4ff2b815527afaa513c052ac0
Author: George Dunlap <george.dunlap@xxxxxxxxxx>
Date:   Thu Nov 16 12:00:28 2017 +0100

    x86/mm: Make PV linear pagetables optional
    
    Allowing pagetables to point to other pagetables of the same level
    (often called 'linear pagetables') has been included in Xen since its
    inception; but recently it has been the source of a number of subtle
    reference-counting bugs.
    
    It is not used by Linux or MiniOS; but it is used by NetBSD and Novell
    Netware.  There are significant numbers of people who are never going
    to use the feature, along with significant numbers who need the
    feature.
    
    Add a Kconfig option for the feature (default to 'y').  Also add a
    command-line option to control whether PV linear pagetables are
    allowed (default to 'true').
    
    NB that we leave linear_pt_count in the page struct.  It's in a union,
    so its presence doesn't increase the size of the data struct.
    Changing the layout of the other elements based on configuration
    options is asking for trouble however; so we'll just leave it there
    and ASSERT that it's zero.
    
    Reported-by: Jann Horn <jannh@xxxxxxxxxx>
    Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: 3285e75dea89afb0ef5b3ee39bd15194bd7cc110
    master date: 2017-10-27 14:36:45 +0100
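
A rough, self-contained sketch of the shape of such a change (the option and parameter names below are assumptions for the example, not necessarily those used in the commit): a build-time default combined with a runtime boolean switch that makes the code refuse to create linear pagetable entries when disabled.

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    #ifndef CONFIG_PV_LINEAR_PT
    #define CONFIG_PV_LINEAR_PT 1               /* Kconfig default: y */
    #endif

    static bool opt_pv_linear_pt = CONFIG_PV_LINEAR_PT;

    static void parse_cmdline(const char *cmdline)
    {
        if ( strstr(cmdline, "pv-linear-pt=0") )    /* parameter name assumed */
            opt_pv_linear_pt = false;
    }

    static int create_linear_pt_entry(void)
    {
        if ( !opt_pv_linear_pt )
        {
            fprintf(stderr, "linear pagetables disabled by configuration\n");
            return -1;                   /* refuse instead of taking the ref */
        }
        /* ... bump linear_pt_count and install the entry ... */
        return 0;
    }

    int main(void)
    {
        parse_cmdline("pv-linear-pt=0");
        return create_linear_pt_entry() == 0 ? 0 : 1;
    }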

commit de27faa6e31072333d87cc931bf43ac3ba96ff8b
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Nov 16 11:59:55 2017 +0100

    x86: fix asm() constraint for GS selector update
    
    Exception fixup code may alter the operand, which ought to be reflected
    in the constraint.
    
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    master commit: 65ab53de34851243fb7793ebf12fd92a65f84ddd
    master date: 2017-10-27 13:49:10 +0100
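
The constraint issue in general terms, as an illustration only (not the actual Xen hunk): when the asm body, or an exception fixup path attached to it, may modify an operand, that operand must be declared read-write ("+") rather than input-only, otherwise the compiler may keep using the stale pre-asm value.

    static inline unsigned short load_selector(unsigned short sel)
    {
        /* "+r" tells the compiler that sel may be changed by the asm
         * (e.g. replaced with a safe value by fixup code).  The real body
         * is elided; an empty template is enough to show the constraint. */
        asm volatile ( "" : "+r" (sel) );
        return sel;
    }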

commit f8e806fddc5502350a7e546e69387de46ab1eca4
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Nov 16 11:59:21 2017 +0100

    x86: don't latch wrong (stale) GS base addresses
    
    load_segments() writes selector registers before doing any of the base
    address updates. Any of these selector loads can cause a page fault in
    case it references the LDT, and the LDT page accessed was only recently
    installed. Therefore the call tree map_ldt_shadow_page() ->
    guest_get_eff_kern_l1e() -> toggle_guest_mode() would in such a case
    wrongly latch the outgoing vCPU's GS.base into the incoming vCPU's
    recorded state.
    
    Split page table toggling from GS handling - neither
    guest_get_eff_kern_l1e() nor guest_io_okay() need more than the page
    tables being the kernel ones for the memory access they want to do.
    
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    master commit: a711f6f24a7157ae70d1cc32e61b98f23dc0c584
    master date: 2017-10-27 13:49:10 +0100

commit a27ed6a9bf0ac7fb4768c8e7234411ebbb0d090b
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Nov 16 11:58:47 2017 +0100

    x86: also show FS/GS base addresses when dumping registers
    
    Their state may be important for figuring out the reason for a crash.
    To avoid further growing duplicate code, break out a helper function.
    
    I realize that (ab)using the control register array here may not be
    considered the nicest solution, but it seems easier (and less overall
    overhead) to do so compared to the alternative of introducing another
    helper structure.
    
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    master commit: be7f60b5a39741eab0a8fea0324f7be0cb724cfb
    master date: 2017-10-24 18:13:13 +0200

commit a82350f7587b83d0b47239edb832a8816a33a77c
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Nov 16 11:58:10 2017 +0100

    x86: fix GS-base-dirty determination
    
    load_segments() writes the two MSRs in their "canonical" positions
    (GS_BASE for the user base, SHADOW_GS_BASE for the kernel one) and uses
    SWAPGS to switch them around if the incoming vCPU is in kernel mode. In
    order to not leave a stale kernel address in GS_BASE when the incoming
    guest is in user mode, the check on the outgoing vCPU needs to be
    dependent upon the mode it is currently in, rather than blindly looking
    at the user base.
    
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    master commit: 91f85280b9b80852352fcad73d94ed29fafb88da
    master date: 2017-10-24 18:12:31 +0200
(qemu changes not included)

_______________________________________________
osstest-output mailing list
osstest-output@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/cgi-bin/mailman/listinfo/osstest-output

 

