
[xen-4.6-testing test] 114422: regressions - FAIL

flight 114422 xen-4.6-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/114422/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-xtf-amd64-amd64-4 48 xtf/test-hvm64-lbr-tsx-vmentry fail REGR. vs. 114097
 test-armhf-armhf-xl-credit2 16 guest-start/debian.repeat fail REGR. vs. 114097

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds   16 guest-start/debian.repeat fail blocked in 114097
 test-xtf-amd64-amd64-3      48 xtf/test-hvm64-lbr-tsx-vmentry fail like 114097
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 114097
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-check    fail  like 114097
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 114097
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 114097
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 114097
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 114097
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 114097
 test-xtf-amd64-amd64-3       72 xtf/test-pv32pae-xsa-194     fail   never pass
 test-xtf-amd64-amd64-4       72 xtf/test-pv32pae-xsa-194     fail   never pass
 test-xtf-amd64-amd64-5       72 xtf/test-pv32pae-xsa-194     fail   never pass
 test-xtf-amd64-amd64-1       72 xtf/test-pv32pae-xsa-194     fail   never pass
 test-xtf-amd64-amd64-2       72 xtf/test-pv32pae-xsa-194     fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install        fail never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 10 windows-install        fail never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-check        fail  never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 13 guest-saverestore       fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 13 guest-saverestore       fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install         fail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install         fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-install        fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-install        fail never pass
 test-armhf-armhf-xl-xsm      13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-xsm      14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  aad5a67587b493e2478e1e46f71404c3dd41a937
baseline version:
 xen                  78fd0c3765cf89befb2338ac342a0c8a3e29ba3d

Last test of basis   114097  2017-10-07 12:28:11 Z    6 days
Testing same since   114422  2017-10-12 14:11:19 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  George Dunlap <george.dunlap@xxxxxxxxxx>
  Jan Beulich <jbeulich@xxxxxxxx>
  Tim Deegan <tim@xxxxxxx>
  Vitaly Kuznetsov <vkuznets@xxxxxxxxxx>

jobs:
 build-amd64-xsm                                              pass    
 build-armhf-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 build-amd64-rumprun                                          pass    
 build-i386-rumprun                                           pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm                pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm                pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm                 pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-armhf-armhf-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-armhf-armhf-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-rumprun-amd64                               pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-i386-rumprun-i386                                 pass    
 test-amd64-amd64-xl-qemut-win10-i386                         fail    
 test-amd64-i386-xl-qemut-win10-i386                          fail    
 test-amd64-amd64-xl-qemuu-win10-i386                         fail    
 test-amd64-i386-xl-qemuu-win10-i386                          fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-i386-libvirt-qcow2                                pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit aad5a67587b493e2478e1e46f71404c3dd41a937
Author: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Date:   Thu Oct 12 15:41:57 2017 +0200

    x86/cpu: Fix IST handling during PCPU bringup
    
    Clear IST references in newly allocated IDTs.  Nothing good will come of
    having them set before the TSS is suitably constructed (although the chances
    of the CPU surviving such an IST interrupt/exception are extremely slim).
    
    Uniformly set the IST references after the TSS is in place.  This fixes an
    issue on AMD hardware, where onlining a PCPU while PCPU0 is in HVM context
    will cause IST_NONE to be copied into the new IDT, making that PCPU
    vulnerable to privilege escalation from PV guests until it subsequently
    schedules an HVM guest.
    
    This is XSA-244.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: cc08c73c8c1f5ba5ed0f8274548db6725e1c3157
    master date: 2017-10-12 14:50:31 +0200
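
The fix above is about ordering during CPU bringup: IST fields in a freshly
copied IDT must stay clear until the per-CPU TSS (and the stacks it selects)
exists, and only then be pointed at their stacks.  A minimal sketch of that
ordering, with a simplified gate layout and hypothetical IST slot numbers
(not Xen's actual IDT code):

    /* Simplified sketch -- not Xen's idt_entry_t; slot numbers illustrative. */
    #include <stdint.h>

    #define IDT_ENTRIES 256
    #define IST_NONE 0           /* no dedicated stack */
    #define IST_NMI  1           /* hypothetical slot assignments */
    #define IST_DF   2
    #define IST_MCE  3

    struct idt_gate { uint64_t handler; uint8_t ist; };

    /* Step 1: a newly allocated/copied IDT must not reference any IST
     * stack yet -- the TSS a stack would come from is not set up. */
    static void clear_ists(struct idt_gate *idt)
    {
        for (unsigned int i = 0; i < IDT_ENTRIES; i++)
            idt[i].ist = IST_NONE;
    }

    /* Step 2: only once the per-CPU TSS has been constructed and loaded. */
    static void enable_ists(struct idt_gate *idt)
    {
        idt[2].ist  = IST_NMI;   /* #NMI */
        idt[8].ist  = IST_DF;    /* #DF  */
        idt[18].ist = IST_MCE;   /* #MC  */
    }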

commit d8b0ebfc1d1e9f59393cc3c11584c01712d6024b
Author: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Date:   Thu Oct 12 15:41:31 2017 +0200

    x86/shadow: Don't create self-linear shadow mappings for 4-level translated guests
    
    When initially creating a monitor table for 4-level translated guests, don't
    install a shadow-linear mapping.  This mapping is actually self-linear, and
    trips up the writeable heuristic logic into following Xen's mappings, not
    the guests' shadows it was expecting to follow.
    
    A consequence of this is that sh_guess_wrmap() needs to cope with there
    being no shadow-linear mapping present, which in practice occurs once each
    time a vcpu switches to 4-level paging from a different paging mode.
    
    An appropriate shadow-linear slot will be inserted into the monitor table
    either while constructing lower level monitor tables, or by sh_update_cr3().
    
    While fixing this, clarify the safety of the other mappings.  Despite
    appearing unsafe, it is correct to create a guest-linear mapping for
    translated domains; this is self-linear and doesn't point into the
    translated domain.  Drop a dead clause for translate != external guests.
    
    This is XSA-243.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Acked-by: Tim Deegan <tim@xxxxxxx>
    master commit: bf2b4eadcf379d0361b38de9725ea5f7a18a5205
    master date: 2017-10-12 14:50:07 +0200

commit f0208a4eb33f7a13cf0319e49e6803d03b5b2793
Author: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Date:   Thu Oct 12 15:40:59 2017 +0200

    x86: Disable the use of auto-translated PV guests
    
    This is a minimal backport of c/s 92942fd3d469 "x86/mm: drop
    guest_{map,get_eff}_l1e() hooks" from Xen 4.7, which stated:
    
      Disallow the unmaintained and presumed broken translated-but-not-external
      paging mode combination ...
    
    It turns out that this mode is insecure to run with, as opposed to just
    simply broken.
    
    This is part of XSA-243.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>

commit 42b2c82081fa2bc9b7fe37c8ae69687a3f5e91fb
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Oct 12 15:40:04 2017 +0200

    x86: don't allow page_unlock() to drop the last type reference
    
    Only _put_page_type() does the necessary cleanup, and hence not all
    domain pages can be released during guest cleanup (leaving around
    zombie domains) if we get this wrong.
    
    This is XSA-242.
    
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: 6410733a8a0dff2fe581338ff631670cf91889db
    master date: 2017-10-12 14:49:46 +0200
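
The rule above can be read as an invariant: dropping the final type reference
(and doing the cleanup) is _put_page_type()'s job alone, so the unlock path
must always find more than one reference outstanding.  A rough sketch of that
invariant using a plain atomic counter (Xen's real type_info encoding and
locking are more involved):

    /* Rough sketch only; the real code tracks this in the page's type_info. */
    #include <assert.h>
    #include <stdatomic.h>

    struct page { atomic_int type_count; };

    /* The only path allowed to drop the last reference and clean up. */
    static void put_page_type(struct page *pg)
    {
        if (atomic_fetch_sub(&pg->type_count, 1) == 1) {
            /* last type reference gone: tear down type-specific state */
        }
    }

    /* Unlock piggybacks on the type count but must never be the one to
     * drop it to zero -- the caller still holds a reference of its own. */
    static void page_unlock(struct page *pg)
    {
        int old = atomic_fetch_sub(&pg->type_count, 1);
        assert(old > 1);
        (void)old;
    }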

commit 57318e1cf7a9b6c2cfb791b25124451ef493cd01
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Oct 12 15:39:31 2017 +0200

    x86: don't store possibly stale TLB flush time stamp
    
    While the timing window is extremely narrow, it is theoretically
    possible for an update to the TLB flush clock and a subsequent flush
    IPI to happen between the read and write parts of the update of the
    per-page stamp. Exclude this possibility by disabling interrupts
    across the update, preventing the IPI from being serviced in the middle.
    
    This is XSA-241.
    
    Reported-by: Jann Horn <jannh@xxxxxxxxxx>
    Suggested-by: George Dunlap <george.dunlap@xxxxxxxxxx>
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: George Dunlap <george.dunlap@xxxxxxxxxx>
    master commit: 23a183607a427572185fc51c76cc5ab11c00c4cc
    master date: 2017-10-12 14:48:25 +0200
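
The window being closed sits between reading the global flush clock and
storing it into the per-page stamp; if a flush IPI is serviced in between,
the stored stamp is stale.  A sketch of the pattern with stubbed interrupt
primitives and illustrative names (not the actual Xen helpers):

    /* Sketch only: local_irq_save/restore are stand-ins, names illustrative. */
    #include <stdint.h>

    typedef unsigned long irqflags_t;
    #define local_irq_save(f)    ((void)((f) = 0))
    #define local_irq_restore(f) ((void)(f))

    static uint32_t tlbflush_clock;                 /* global flush clock */
    struct page_info { uint32_t tlbflush_timestamp; };

    static void set_tlbflush_timestamp(struct page_info *pg)
    {
        irqflags_t flags;

        /* Make the read of the clock and the write of the stamp appear
         * atomic with respect to the flush IPI: with interrupts off, the
         * IPI cannot be serviced between the two. */
        local_irq_save(flags);
        pg->tlbflush_timestamp = tlbflush_clock;
        local_irq_restore(flags);
    }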

commit 9f22d72cdb1fecdb26dc8bae1c3c97861adf7e57
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Oct 12 15:38:27 2017 +0200

    x86: limit linear page table use to a single level
    
    That's the only way that they're meant to be used. Without such a
    restriction arbitrarily long chains of same-level page tables can be
    built, tearing down of which may then cause arbitrarily deep recursion,
    causing a stack overflow. To facilitate this restriction, a counter is
    being introduced to track both the number of same-level entries in a
    page table as well as the number of uses of a page table in another
    same-level one (counting in the positive and negative directions
    respectively, utilizing the fact that both counts can't be non-zero at
    the same time).
    
    Note that the added accounting introduces a restriction on the number
    of times a page can be used in other same-level page tables - more than
    32k of such uses are no longer possible.
    
    Note also that some put_page_and_type[_preemptible]() calls are
    replaced with open-coded equivalents.  This seemed preferable to
    adding "parent_table" to the matrix of functions.
    
    Note further that cross-domain same-level page table references are no
    longer permitted (they probably never should have been).
    
    This is XSA-240.
    
    Reported-by: Jann Horn <jannh@xxxxxxxxxx>
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxx>
    master commit: 6987fc7558bdbab8119eabf026e3cdad1053f0e5
    master date: 2017-10-12 14:44:34 +0200
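
The counter trick relies on the two quantities being mutually exclusive, so a
single signed field can carry both: positive for same-level entries contained
in a table, negative for uses of the table in other same-level tables, with
the 16-bit width giving the ~32k limit mentioned above.  A simplified,
non-atomic sketch (the real code updates its field atomically, and the field
lives in struct page_info):

    /* Simplified sketch; field and function names are illustrative. */
    #include <limits.h>
    #include <stdbool.h>

    struct pgtable {
        /* > 0: number of same-level (linear) entries in this table
         * < 0: number of times this table is itself used as such an entry
         * Both can't be non-zero at once, so one signed 16-bit field
         * suffices -- hence the ~32k cap on uses. */
        short linear_pt_count;
    };

    /* This table gains a same-level entry pointing at another table. */
    static bool inc_linear_entries(struct pgtable *pt)
    {
        if (pt->linear_pt_count < 0 || pt->linear_pt_count == SHRT_MAX)
            return false;        /* used elsewhere, or counter saturated */
        pt->linear_pt_count++;
        return true;
    }

    /* This table is installed as a same-level entry in another table. */
    static bool inc_linear_uses(struct pgtable *pt)
    {
        if (pt->linear_pt_count > 0 || pt->linear_pt_count == SHRT_MIN)
            return false;        /* contains such entries, or ~32k uses */
        pt->linear_pt_count--;
        return true;
    }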

commit e0353b455ce8af495c8fe379d6c6832cb7f87651
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Oct 12 15:37:57 2017 +0200

    x86/HVM: prefill partially used variable on emulation paths
    
    Certain handlers ignore the access size (vioapic_write() being the
    example this was found with), perhaps leading to subsequent reads
    seeing data that wasn't actually written by the guest. For
    consistency and extra safety also do this on the read path of
    hvm_process_io_intercept(), even if this doesn't directly affect what
    guests get to see, as we've supposedly already dealt with read handlers
    leaving data completely uninitialized.
    
    This is XSA-239.
    
    Reported-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
    Reviewed-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: 0d4732ac29b63063764c29fa3bd8946daf67d6f3
    master date: 2017-10-12 14:43:26 +0200
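
The hazard is a local variable on the emulation path that is only partially
filled in for a sub-width access; a handler that ignores the access size then
consumes (and may later expose) the remaining stale bytes.  A minimal sketch
of the prefill with a simplified handler signature (not the real intercept
types):

    /* Sketch with simplified types; names are illustrative. */
    #include <assert.h>
    #include <stdint.h>
    #include <string.h>

    typedef int (*io_write_fn)(uint64_t addr, unsigned int size,
                               const void *data);

    static int intercept_write(io_write_fn handler, uint64_t addr,
                               unsigned int size, uint64_t val)
    {
        uint64_t data;

        assert(size <= sizeof(data));

        /* Prefill the whole variable: a handler that ignores 'size' must
         * see defined bytes, not stale stack contents the guest never
         * actually wrote. */
        memset(&data, 0, sizeof(data));
        memcpy(&data, &val, size);

        return handler(addr, size, &data);
    }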

commit 76f154986f8afa1077478b4681ea82b0bf16896c
Author: Vitaly Kuznetsov <vkuznets@xxxxxxxxxx>
Date:   Thu Oct 12 15:37:21 2017 +0200

    x86/ioreq server: correctly handle bogus XEN_DMOP_{,un}map_io_range_to_ioreq_server arguments
    
    A misbehaving device model can pass incorrect XEN_DMOP_map/
    unmap_io_range_to_ioreq_server arguments, namely end < start when
    specifying an address range. When this happens we hit ASSERT(s <= e) in
    rangeset_contains_range()/rangeset_overlaps_range() with debug builds.
    Production builds will not trap right away but may misbehave later
    while handling such bogus ranges.
    
    This is XSA-238.
    
    Signed-off-by: Vitaly Kuznetsov <vkuznets@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: d59e55b018cfb79d0c4f794041aff4fe1cd0d570
    master date: 2017-10-12 14:43:02 +0200
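
The fix boils down to rejecting inverted ranges at the hypercall boundary,
before they ever reach the rangeset code.  A sketch of the check with a
trimmed-down signature (the real handler operates on a specific ioreq server
and range type):

    /* Sketch only -- trimmed-down signature, illustrative behaviour. */
    #include <errno.h>
    #include <stdint.h>

    static int map_io_range_to_ioreq_server(uint64_t start, uint64_t end)
    {
        /* Refuse bogus input from the device model up front, rather than
         * hitting ASSERT(s <= e) in rangeset_*() on debug builds or
         * misbehaving later on production builds. */
        if (start > end)
            return -EINVAL;

        /* ... rangeset_add_range(..., start, end) etc. ... */
        return 0;
    }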

commit 9bac9102304f40cc5ba944d13dbcd05a63d4203f
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Oct 12 15:36:54 2017 +0200

    x86/FLASK: fix unmap-domain-IRQ XSM hook
    
    The caller and the FLASK implementation of xsm_unmap_domain_irq()
    disagreed about what the "data" argument points to in the MSI case:
    Change both sides to pass/take a PCI device.
    
    This is part of XSA-237.
    
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    master commit: 6f17f5c43a3bd28d27ed8133b2bf513e2eab7d59
    master date: 2017-10-12 14:37:56 +0200

commit c7a43e30609b1a791b3d5f682551bd0fd08f1719
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Oct 12 15:36:21 2017 +0200

    x86/IRQ: conditionally preserve irq <-> pirq mapping on map error paths
    
    Mappings that had been set up before should not be torn down when
    handling unrelated errors.
    
    This is part of XSA-237.
    
    Reported-by: HW42 <hw42@xxxxxxxxx>
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: George Dunlap <george.dunlap@xxxxxxxxxx>
    master commit: 573ac7b22aba9e5b8d40d9cdccd744af57cd5928
    master date: 2017-10-12 14:37:26 +0200
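
The rule here is that an error path should only undo state the failing call
itself created: if the irq <-> pirq link already existed on entry, an
unrelated later failure must leave it in place.  A sketch of that shape with
hypothetical helpers (not the actual map_domain_pirq() internals):

    /* Shape of the fix only; helpers and fields are hypothetical. */
    #include <stdbool.h>

    struct pirq_req { bool preexisting; /* link already set up on entry? */ };

    static int  establish_link(struct pirq_req *r)  { (void)r; return 0; }
    static void destroy_link(struct pirq_req *r)    { (void)r; }
    static int  remaining_setup(struct pirq_req *r) { (void)r; return 0; }

    static int map_pirq(struct pirq_req *r)
    {
        int rc = 0;

        if (!r->preexisting)
            rc = establish_link(r);
        if (rc)
            return rc;

        rc = remaining_setup(r);
        if (rc && !r->preexisting)
            destroy_link(r);     /* only tear down what *we* created */

        return rc;
    }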

commit 913d4f80c86ae14996b347d2f491769e345ca583
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Oct 12 15:35:58 2017 +0200

    x86/MSI: disallow redundant enabling
    
    At the moment, Xen attempts to allow redundant enabling of MSI by
    having pci_enable_msi() return 0, and point to the existing MSI
    descriptor, when the MSI already exists.
    
    Unfortunately, if subsequent errors are encountered, the cleanup
    paths assume pci_enable_msi() had done full initialization, and
    hence undo everything that was assumed to be done by that
    function without also undoing other setup that would normally
    occur only after that function was called (in map_domain_pirq()
    itself).
    
    Rather than try to make the redundant enabling case work properly, just
    forbid it entirely by having pci_enable_msi() return -EEXIST when MSI
    is already set up.
    
    This is part of XSA-237.
    
    Reported-by: HW42 <hw42@xxxxxxxxx>
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: George Dunlap <george.dunlap@xxxxxxxxxx>
    master commit: a46126fec20e0cf4f5442352ef45efaea8c89646
    master date: 2017-10-12 14:36:58 +0200
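
The behavioural change is small: instead of pretending a second enable
succeeded (and handing back the existing descriptor, which callers' error
paths then "clean up" too aggressively), the redundant case is refused
outright.  A sketch with a simplified signature (the real pci_enable_msi()
takes more context):

    /* Simplified sketch; signature and fields are illustrative. */
    #include <errno.h>
    #include <stddef.h>

    struct msi_desc;
    struct pci_dev { struct msi_desc *msi; };

    static int pci_enable_msi_sketch(struct pci_dev *pdev,
                                     struct msi_desc **desc)
    {
        /* Previously: return 0 and point at the existing descriptor,
         * letting callers' error paths undo setup they never performed.
         * Now the redundant enable is simply rejected. */
        if (pdev->msi != NULL)
            return -EEXIST;

        /* ... allocate and program the MSI capability, set *desc ... */
        (void)desc;
        return 0;
    }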

commit c5881c540fd27e12de2a2ac27504550527de6dde
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Oct 12 15:35:30 2017 +0200

    x86: enforce proper privilege when (un)mapping pIRQ-s
    
    (Un)mapping of IRQs, just like other RESOURCE__ADD* / RESOURCE__REMOVE*
    actions (in FLASK terms) should be XSM_DM_PRIV rather than XSM_TARGET.
    This in turn requires bypassing the XSM check in physdev_unmap_pirq()
    for the HVM emuirq case just like is being done in physdev_map_pirq().
    The primary goal security-wise, however, is to no longer allow HVM
    guests, by specifying their own domain ID instead of DOMID_SELF, to
    enter code paths intended only for PV guests and the control domains of
    HVM guests.
    
    This is part of XSA-237.
    
    Reported-by: HW42 <hw42@xxxxxxxxx>
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: George Dunlap <george.dunlap@xxxxxxxxxx>
    master commit: db72faf69c94513e180568006a9d899ed422ff90
    master date: 2017-10-12 14:36:30 +0200

commit b0239cd7269da15027971b5cf2e2a94d4b871876
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Oct 12 15:34:58 2017 +0200

    x86: don't allow MSI pIRQ mapping on unowned device
    
    MSI setup should be permitted only for existing devices owned by the
    respective guest (the operation may still be carried out by the domain
    controlling that guest).
    
    This is part of XSA-237.
    
    Reported-by: HW42 <hw42@xxxxxxxxx>
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    master commit: 3308374b1be7d43e23bd2e9eaf23ec06d7959882
    master date: 2017-10-12 14:35:14 +0200
(qemu changes not included)
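
For the last commit above (MSI pIRQ mapping on unowned devices), the check is
simply that the device exists and is owned by the guest the mapping is for,
while still allowing that guest's controlling domain to issue the request.  A
sketch with stand-in types:

    /* Stand-in types; the real check sits on the MSI mapping path. */
    #include <errno.h>
    #include <stddef.h>

    struct domain;
    struct pci_dev { const struct domain *owner; };

    static int msi_setup_allowed(const struct pci_dev *pdev,
                                 const struct domain *target)
    {
        /* The device must exist and belong to the guest the mapping is
         * for; the request itself may still come from the domain
         * controlling that guest. */
        if (pdev == NULL || pdev->owner != target)
            return -EPERM;
        return 0;
    }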

_______________________________________________
osstest-output mailing list
osstest-output@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/cgi-bin/mailman/listinfo/osstest-output

 

