
[Xen-devel] [xen-unstable test] 6894: regressions - FAIL



flight 6894 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/6894/

Regressions :-(

Tests which did not succeed and are blocking:
 test-i386-i386-win            7 windows-install            fail REGR. vs. 6878

Tests which are failing intermittently (not blocking):
 test-amd64-i386-rhel6hvm-intel  7 redhat-install             fail pass in 6890
 test-i386-i386-xl-win         7 windows-install              fail pass in 6890

Tests which did not succeed, but are not blocking,
including regressions (tests previously passed) regarded as allowable:
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-i386-rhel6hvm-amd  8 guest-saverestore            fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-xcpkern-i386-rhel6hvm-amd  8 guest-saverestore      fail never pass
 test-amd64-xcpkern-i386-rhel6hvm-intel  8 guest-saverestore    fail never pass
 test-amd64-xcpkern-i386-win  16 leak-check/check             fail   never pass
 test-amd64-xcpkern-i386-xl-win 13 guest-stop                   fail never pass
 test-i386-xcpkern-i386-win   16 leak-check/check             fail   never pass

version targeted for testing:
 xen                  3539ef956a37
baseline version:
 xen                  381ab77db71a

------------------------------------------------------------
People who touched revisions under test:
  Andre Przywara <andre.przywara@xxxxxxx>
  George Dunlap <george.dunlap@xxxxxxxxxxxxx>
  Ian Campbell <ian.campbell@xxxxxxxxxx>
  Keir Fraser <keir@xxxxxxx>
  Olaf Hering <olaf@xxxxxxxxx>
  Tim Deegan <Tim.Deegan@xxxxxxxxxx>
  Wei Wang <wei.wang2@xxxxxxx>
------------------------------------------------------------

jobs:
 build-i386-xcpkern                                           pass     
 build-amd64                                                  pass     
 build-i386                                                   pass     
 build-amd64-oldkern                                          pass     
 build-i386-oldkern                                           pass     
 build-amd64-pvops                                            pass     
 build-i386-pvops                                             pass     
 test-amd64-amd64-xl                                          pass     
 test-amd64-i386-xl                                           pass     
 test-i386-i386-xl                                            pass     
 test-amd64-xcpkern-i386-xl                                   pass     
 test-i386-xcpkern-i386-xl                                    pass     
 test-amd64-i386-rhel6hvm-amd                                 fail     
 test-amd64-xcpkern-i386-rhel6hvm-amd                         fail     
 test-amd64-i386-xl-credit2                                   pass     
 test-amd64-xcpkern-i386-xl-credit2                           pass     
 test-amd64-i386-rhel6hvm-intel                               fail     
 test-amd64-xcpkern-i386-rhel6hvm-intel                       fail     
 test-amd64-i386-xl-multivcpu                                 pass     
 test-amd64-xcpkern-i386-xl-multivcpu                         pass     
 test-amd64-amd64-pair                                        pass     
 test-amd64-i386-pair                                         pass     
 test-i386-i386-pair                                          pass     
 test-amd64-xcpkern-i386-pair                                 pass     
 test-i386-xcpkern-i386-pair                                  pass     
 test-amd64-amd64-pv                                          pass     
 test-amd64-i386-pv                                           pass     
 test-i386-i386-pv                                            pass     
 test-amd64-xcpkern-i386-pv                                   pass     
 test-i386-xcpkern-i386-pv                                    pass     
 test-amd64-i386-win-vcpus1                                   fail     
 test-amd64-i386-xl-win-vcpus1                                fail     
 test-amd64-amd64-win                                         fail     
 test-amd64-i386-win                                          fail     
 test-i386-i386-win                                           fail     
 test-amd64-xcpkern-i386-win                                  fail     
 test-i386-xcpkern-i386-win                                   fail     
 test-amd64-amd64-xl-win                                      fail     
 test-i386-i386-xl-win                                        fail     
 test-amd64-xcpkern-i386-xl-win                               fail     


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23245:3539ef956a37
tag:         tip
user:        Keir Fraser <keir@xxxxxxx>
date:        Mon Apr 18 18:34:45 2011 +0100
    
    tools: hvmloader: attempt to SHUTDOWN_crash on BUG
    
    Executing UD2 (an invalid opcode) triggers a triple fault, which the
    toolstack interprets as a reboot request rather than a crash.
    
    Signed-off-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
    Signed-off-by: Keir Fraser <keir@xxxxxxx>
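
The distinction this patch addresses can be sketched as follows. The reason code mirrors Xen's SCHEDOP_shutdown interface (SHUTDOWN_crash from the public sched.h), but the helper names and the toolstack-policy table here are illustrative, not hvmloader's or xend's actual code:

```python
# Illustrative sketch (not hvmloader source): why an explicit
# SHUTDOWN_crash is preferable to executing UD2 on BUG().
# Reason codes as in Xen's public sched.h shutdown interface:
SHUTDOWN_reboot = 1
SHUTDOWN_crash = 3

def toolstack_action(shutdown_reason):
    # The toolstack restarts a guest that rebooted, but keeps a crashed
    # guest around for post-mortem analysis.
    return {SHUTDOWN_reboot: "restart guest",
            SHUTDOWN_crash: "report crash"}.get(shutdown_reason, "ignore")

def hvmloader_bug():
    # Before the patch: execute UD2 -> triple fault -> looks like a
    # reboot to the toolstack, so the guest is silently restarted.
    # After: request SCHEDOP_shutdown(SHUTDOWN_crash) so the failure is
    # reported as a crash (UD2 remains only as a last-resort fallback).
    return toolstack_action(SHUTDOWN_crash)
```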
    
    
changeset:   23244:024b06de81ca
user:        Keir Fraser <keir@xxxxxxx>
date:        Mon Apr 18 18:08:47 2011 +0100
    
    hvmloader: Fix _start-relative calculation of hypercall page address.
    
    We got away with it because _start-HYPERCALL_PHYSICAL_ADDRESS happens
    to equal HYPERCALL_PHYSICAL_ADDRESS.
    
    Signed-off-by: Keir Fraser <keir@xxxxxxx>
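
The coincidence the message describes is easy to check numerically. The addresses below are assumptions based on hvmloader's conventional layout (its config.h), not values quoted in this changeset:

```python
# Assumed layout constants (illustrative; check hvmloader's config.h):
HVMLOADER_START = 0x100000            # where hvmloader's _start is loaded
HYPERCALL_PHYSICAL_ADDRESS = 0x80000  # absolute address of the hypercall page

# The buggy _start-relative computation:
relative = HVMLOADER_START - HYPERCALL_PHYSICAL_ADDRESS

# It "got away with it" only because the two values happen to coincide:
assert relative == HYPERCALL_PHYSICAL_ADDRESS  # 0x80000 == 0x80000
```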
    
    
changeset:   23243:5e445a5a8eef
user:        Wei Wang <wei.wang2@xxxxxxx>
date:        Mon Apr 18 17:24:21 2011 +0100
    
    x86/mm: Add a generic interface for VT-d and AMD IOMMU p2m sharing.
    Also introduce a new parameter (iommu=sharept) to enable this feature.
    
    Signed-off-by: Wei Wang <wei.wang2@xxxxxxx>
    Acked-by: Tim Deegan <Tim.Deegan@xxxxxxxxxx>
    Committed-by: Tim Deegan <Tim.Deegan@xxxxxxxxxx>
    
    
changeset:   23242:835550a0c6c0
user:        Wei Wang <wei.wang2@xxxxxxx>
date:        Mon Apr 18 17:24:21 2011 +0100
    
    x86/mm: Implement p2m table sharing for AMD IOMMU.
    
    Signed-off-by: Wei Wang <wei.wang2@xxxxxxx>
    Acked-by: Tim Deegan <Tim.Deegan@xxxxxxxxxx>
    Committed-by: Tim Deegan <Tim.Deegan@xxxxxxxxxx>
    
    
changeset:   23241:e37b600d5f14
user:        Wei Wang <wei.wang2@xxxxxxx>
date:        Mon Apr 18 17:24:21 2011 +0100
    
    x86/mm: add AMD IOMMU control bits to p2m entries.
    
    This patch adds next-level bits into bits 9-11 of p2m entries and
    adds r/w permission bits into bits 61-62 of p2m entries.
    
    Signed-off-by: Wei Wang <wei.wang2@xxxxxxx>
    Acked-by: Tim Deegan <Tim.Deegan@xxxxxxxxxx>
    Committed-by: Tim Deegan <Tim.Deegan@xxxxxxxxxx>
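
The bit layout described above can be sketched with a few mask helpers. This is an illustration of the encoding, not Xen's p2m code; the assignment of bit 61 to read and bit 62 to write is an assumption drawn from the AMD IOMMU page-table format:

```python
# Illustrative encoding of the AMD IOMMU control bits in a p2m entry.
NEXT_LEVEL_SHIFT = 9              # bits 9-11: next page-table level
NEXT_LEVEL_MASK = 0x7 << NEXT_LEVEL_SHIFT
IOMMU_RW_SHIFT = 61               # bits 61-62: read/write permission
IOMMU_RW_MASK = 0x3 << IOMMU_RW_SHIFT

def set_iommu_bits(entry, next_level, readable, writable):
    """Clear and re-encode the IOMMU control fields of a p2m entry."""
    entry &= ~(NEXT_LEVEL_MASK | IOMMU_RW_MASK)
    entry |= (next_level & 0x7) << NEXT_LEVEL_SHIFT
    # Assumed convention: read permission in bit 61, write in bit 62.
    entry |= ((readable | (writable << 1)) & 0x3) << IOMMU_RW_SHIFT
    return entry
```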
    
    
changeset:   23240:78145a98915c
user:        Wei Wang <wei.wang2@xxxxxxx>
date:        Mon Apr 18 17:24:21 2011 +0100
    
    x86/mm: Move p2m type into bits of the PTE that the IOMMU doesn't use.
    
    AMD IOMMU hardware uses bits 9-11 to encode lower page levels, so the
    p2m type bits in the p2m flags have to be shifted from bit 9 to bit
    12.  Also, bits 52-60 cannot be non-zero for an IOMMU PDE, so the
    definition of p2m_ram_rw has to be swapped with p2m_invalid.
    
    Signed-off-by: Wei Wang <wei.wang2@xxxxxxx>
    Acked-by: Tim Deegan <Tim.Deegan@xxxxxxxxxx>
    Committed-by: Tim Deegan <Tim.Deegan@xxxxxxxxxx>
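
The field move described above amounts to relocating a small bitfield. A minimal sketch, with the 4-bit field width being an assumption rather than something stated in the changeset:

```python
# Sketch of moving the p2m type field out of the bits the AMD IOMMU
# needs (illustrative, not Xen code): the type used to sit at bit 9,
# but bits 9-11 now carry the IOMMU next-level encoding, so the type
# field moves up to bit 12.
OLD_TYPE_SHIFT = 9
NEW_TYPE_SHIFT = 12
TYPE_WIDTH = 4                      # assumed width of the type field
TYPE_MASK = (1 << TYPE_WIDTH) - 1

def migrate_type(old_entry):
    """Re-encode an entry's type field at its new bit position."""
    ty = (old_entry >> OLD_TYPE_SHIFT) & TYPE_MASK
    entry = old_entry & ~(TYPE_MASK << OLD_TYPE_SHIFT)
    return entry | (ty << NEW_TYPE_SHIFT)
```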
    
    
changeset:   23239:51d89366c859
user:        Olaf Hering <olaf@xxxxxxxxx>
date:        Mon Apr 18 15:12:04 2011 +0100
    
    xentrace: correct overflow check for number of per-cpu trace pages
    
    The calculated number of per-cpu trace pages is stored in t_info and
    shared with tools such as xentrace. Since it is a u16, the value may
    overflow, because the current check is based on a u32.  With a u16,
    each cpu could in theory use up to 256MB as a trace buffer, although
    such a large allocation will currently fail on x86 due to the
    MAX_ORDER limit.  Check the requested number of pages against both
    the maximum theoretical number of pages per cpu and the maximum
    number of pages reachable by the struct t_buf->prod/cons variables.
    
    Signed-off-by: Olaf Hering <olaf@xxxxxxxxx>
    Acked-by: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
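
The arithmetic behind the overflow check can be sketched as follows; the 4 KiB page size is an assumption, and `pages_ok` is an illustrative helper rather than the actual xentrace code:

```python
# Why a u16 page count caps each CPU's trace buffer at 256 MiB
# (assuming 4 KiB pages), and what the corrected check must verify.
PAGE_SIZE = 4096
U16_MAX = (1 << 16) - 1

# 2**16 pages of 4 KiB each is exactly 256 MiB:
assert (U16_MAX + 1) * PAGE_SIZE == 256 * 2**20

def pages_ok(requested_pages, prod_cons_max_pages):
    # Reject requests that would overflow the u16 t_info field *or*
    # exceed what the t_buf->prod/cons variables can index, instead of
    # checking only against a u32 bound.
    return requested_pages <= U16_MAX and requested_pages <= prod_cons_max_pages
```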
    
    
changeset:   23238:60f5df2afcbb
user:        Keir Fraser <keir@xxxxxxx>
date:        Mon Apr 18 13:36:10 2011 +0100
    
    svm: implement instruction fetch part of DecodeAssist (on #PF/#NPF)
    
    Newer SVM implementations (Bulldozer) copy up to 15 bytes from the
    instruction stream into the VMCB when a #PF or #NPF exception is
    intercepted. This patch makes use of this information if available.
    It saves us from a) traversing the guest's page tables, b) mapping
    the guest's memory and c) copying the instructions from there into
    the hypervisor's address space.
    This speeds up #NPF intercepts quite a lot and avoids cache and TLB
    thrashing.
    
    Signed-off-by: Andre Przywara <andre.przywara@xxxxxxx>
    Signed-off-by: Keir Fraser <keir@xxxxxxx>
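
The fast path described above can be sketched as a simple control-flow decision. Function and field names here are illustrative stand-ins, not Xen's actual identifiers:

```python
# Control-flow sketch of the DecodeAssist fast path (illustrative).
def fetch_insn_bytes(vmcb, guest_fetch):
    # Newer SVM parts copy up to 15 instruction bytes into the VMCB on
    # an intercepted #PF/#NPF.  When they are present, use them and skip
    # the costly slow path (guest page-table walk + map + copy).
    if vmcb.get("insn_len", 0) > 0:
        return vmcb["insn_bytes"][:vmcb["insn_len"]]
    return guest_fetch()  # slow path: walk, map, copy from guest memory
```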
    
    
changeset:   23237:381ab77db71a
user:        Keir Fraser <keir@xxxxxxx>
date:        Mon Apr 18 10:10:02 2011 +0100
    
    svm: decode-assists feature must depend on nextrip feature.
    
    ...since the decode-assist fast paths assume the nextrip VMCB field
    is valid.
    
    Signed-off-by: Keir Fraser <keir@xxxxxxx>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

