
[Xen-devel] [xen-unstable test] 5159: regressions - FAIL



flight 5159 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/5159/

Regressions :-(

Tests which did not succeed and are blocking:
 test-amd64-xcpkern-i386-xl-multivcpu 14 guest-localmigrate/x10 fail REGR. vs. 5145
 test-i386-i386-xl-win         5 xen-boot                   fail REGR. vs. 5145

Tests which did not succeed, but are not blocking,
including regressions (tests previously passed) regarded as allowable:
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win       7 windows-install              fail   never pass
 test-amd64-i386-rhel6hvm-amd  8 guest-saverestore            fail   never pass
 test-amd64-i386-rhel6hvm-intel  8 guest-saverestore            fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1  7 windows-install              fail  never pass
 test-amd64-xcpkern-i386-rhel6hvm-amd  8 guest-saverestore      fail never pass
 test-amd64-xcpkern-i386-rhel6hvm-intel  8 guest-saverestore    fail never pass
 test-amd64-xcpkern-i386-win  16 leak-check/check             fail   never pass
 test-amd64-xcpkern-i386-xl-win  7 windows-install              fail never pass
 test-i386-i386-win           14 guest-start.2                fail    like 5124
 test-i386-xcpkern-i386-win   16 leak-check/check             fail   never pass

version targeted for testing:
 xen                  003acf02d416
baseline version:
 xen                  051a1b1b8f8a

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@xxxxxxxxxx>
  Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
  Kouya Shimura <kouya@xxxxxxxxxxxxxx>
  Tim Deegan <Tim.Deegan@xxxxxxxxxx>
------------------------------------------------------------

jobs:
 build-i386-xcpkern                                           pass     
 build-amd64                                                  pass     
 build-i386                                                   pass     
 build-amd64-oldkern                                          pass     
 build-i386-oldkern                                           pass     
 build-amd64-pvops                                            pass     
 build-i386-pvops                                             pass     
 test-amd64-amd64-xl                                          pass     
 test-amd64-i386-xl                                           pass     
 test-i386-i386-xl                                            pass     
 test-amd64-xcpkern-i386-xl                                   pass     
 test-i386-xcpkern-i386-xl                                    pass     
 test-amd64-i386-rhel6hvm-amd                                 fail     
 test-amd64-xcpkern-i386-rhel6hvm-amd                         fail     
 test-amd64-i386-xl-credit2                                   pass     
 test-amd64-xcpkern-i386-xl-credit2                           pass     
 test-amd64-i386-rhel6hvm-intel                               fail     
 test-amd64-xcpkern-i386-rhel6hvm-intel                       fail     
 test-amd64-i386-xl-multivcpu                                 pass     
 test-amd64-xcpkern-i386-xl-multivcpu                         fail     
 test-amd64-amd64-pair                                        pass     
 test-amd64-i386-pair                                         pass     
 test-i386-i386-pair                                          pass     
 test-amd64-xcpkern-i386-pair                                 pass     
 test-i386-xcpkern-i386-pair                                  pass     
 test-amd64-amd64-pv                                          pass     
 test-amd64-i386-pv                                           pass     
 test-i386-i386-pv                                            pass     
 test-amd64-xcpkern-i386-pv                                   pass     
 test-i386-xcpkern-i386-pv                                    pass     
 test-amd64-i386-win-vcpus1                                   fail     
 test-amd64-i386-xl-win-vcpus1                                fail     
 test-amd64-amd64-win                                         fail     
 test-amd64-i386-win                                          fail     
 test-i386-i386-win                                           fail     
 test-amd64-xcpkern-i386-win                                  fail     
 test-i386-xcpkern-i386-win                                   fail     
 test-amd64-amd64-xl-win                                      fail     
 test-i386-i386-xl-win                                        fail     
 test-amd64-xcpkern-i386-xl-win                               fail     


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   22787:003acf02d416
tag:         tip
user:        Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
date:        Thu Jan 20 17:04:06 2011 +0000
    
    libxl: Make domain_shutdown fail if graceful not possible
    
    Currently "xl shutdown" (like "xm shutdown") is not capable of doing
    the proper ACPI negotiation with an HVM no-pv-drivers guest which
    would be necessary for a graceful shutdown.
    
    Instead (following the ill-advised lead of "xm shutdown") it simply
    shoots the guest in the head.
    
    This patch changes the behaviour so that "xl shutdown" fails if the
    domain cannot be shut down gracefully for this reason and suggests in
    the error message using destroy instead.
    
    Also, check whether the PV shutdown protocol is available before we
    try to use it.
    
    Signed-off-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
    Acked-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
    Committed-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
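    
    The behaviour described above can be sketched as follows. This is an
    illustrative Python sketch of the decision logic only (libxl itself
    is written in C, and all names here are invented), not the actual
    patch:

```python
# Hypothetical sketch: refuse to "shut down" a guest that cannot
# negotiate a graceful shutdown, rather than silently destroying it.
# All names (ShutdownNotPossible, pv_shutdown_capable, control/shutdown)
# are invented for illustration.

class ShutdownNotPossible(Exception):
    """Raised when a graceful shutdown cannot be performed."""

def domain_shutdown(domain):
    # Check whether the PV shutdown protocol is available before we
    # try to use it, as the commit message describes.
    if not domain.get("pv_shutdown_capable", False):
        raise ShutdownNotPossible(
            "guest has no PV drivers / graceful shutdown support; "
            "use destroy instead")
    # Write the PV shutdown request to the guest's control node.
    domain["control/shutdown"] = "poweroff"
```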
    
    
changeset:   22786:6ee4b87d1863
user:        Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>
date:        Thu Jan 20 16:45:54 2011 +0000
    
    QEMU_TAG update
    
    
changeset:   22785:3a89585d77b1
user:        Kouya Shimura <kouya@xxxxxxxxxxxxxx>
date:        Thu Jan 20 16:41:23 2011 +0000
    
    xend: pci.py: fix open file descriptor leak
    
    I got the following error:
        $ xm pci-list-assignable-devices
        Error: [Errno 24] Too many open files
    
    Signed-off-by: Kouya Shimura <kouya@xxxxxxxxxxxxxx>
    
    
changeset:   22784:0592d6ca9177
user:        Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
date:        Thu Jan 20 16:32:33 2011 +0000
    
    libxl: do not attempt to write "shutdown" dm-command
    
    libxl_domain_destroy writes the command "shutdown" to the xenstore
    node through which it communicates with qemu.  However:
     - qemu does not understand this command and ignores it (printing a
       message)
     - libxl doesn't wait for the answer and immediately pauses the domain
       anyway
     - destroy is the ungraceful (force) operation and should not
       negotiate with qemu anyway
     - even in the graceful shutdown case, there would actually be nothing
       that qemu needs to do.
    
    Under some circumstances, this entry in xenstore will survive the
    domain's death, i.e. be leaked.
    
    So remove the erroneous code.
    
    Signed-off-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
    Acked-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
    Committed-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
    
    
changeset:   22783:051a1b1b8f8a
user:        Keir Fraser <keir@xxxxxxx>
date:        Wed Jan 19 18:24:26 2011 +0000
    
    Disable tmem by default for 4.1 release.
    
    Although one major source of order>0 allocations has been removed,
    others still remain, so re-disable tmem until the issue can be fixed
    properly.
    
    Signed-off-by: Tim Deegan <Tim.Deegan@xxxxxxxxxx>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

