[Xen-devel] [xen-unstable-coverity test] 101343: regressions - ALL FAIL

flight 101343 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/101343/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 coverity-amd64                6 coverity-upload          fail REGR. vs. 101279

version targeted for testing:
 xen                  84c1e7d8017c773c41d6e8b79384f37a67be1479
baseline version:
 xen                  b7dd797c7fe4cd849018f78f6c7b9eb3d33b89d8

Last test of basis   101279  2016-10-05 09:19:19 Z    4 days
Testing same since   101343  2016-10-09 09:18:41 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@xxxxxxxx>
  Lan Tianyu <tianyu.lan@xxxxxxxxx>
  Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx>

jobs:
 coverity-amd64                                               fail

------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

Not pushing.

------------------------------------------------------------
commit 84c1e7d8017c773c41d6e8b79384f37a67be1479
Author: Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx>
Date:   Fri Oct 7 11:35:58 2016 +0200

    x86/hvm: remove emulation context setting from hvmemul_cmpxchg()

    hvmemul_cmpxchg() sets the read emulation context in p_new instead
    of p_old, which is inconsistent (and wrong). Since p_old is unused
    in any case, and cmpxchg() semantics would be altered even if it
    weren't, remove the emulation context setting code.

    Suggested-by: Jan Beulich <jbeulich@xxxxxxxx>
    Signed-off-by: Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx>
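The distinction matters because the two pointers play asymmetric roles in a
compare-and-exchange: a read of current memory contents may only ever refresh
p_old, never p_new. A minimal standalone sketch of the convention (illustrative
C only, not the Xen emulator code; cmpxchg_sketch and its mem argument are
invented for this example):

    #include <string.h>

    /*
     * p_old holds the value the instruction expects to find, p_new the
     * value to store on a match.  Writing a fresh read of memory into
     * p_new would corrupt the value about to be stored.
     */
    static int cmpxchg_sketch(void *mem, void *p_old, void *p_new,
                              unsigned int bytes)
    {
        if ( memcmp(mem, p_old, bytes) == 0 )
        {
            memcpy(mem, p_new, bytes);  /* match: store the new value */
            return 1;
        }
        memcpy(p_old, mem, bytes);      /* mismatch: report current value */
        return 0;
    }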
commit ed7e33747da83ce805c00cd457e71075e34f0854
Author: Lan Tianyu <tianyu.lan@xxxxxxxxx>
Date:   Fri Oct 7 11:35:26 2016 +0200

    timer: process softirq during dumping timer info

    Dumping timer info may run for a long time on a huge machine with
    a lot of physical cpus. To avoid triggering the NMI watchdog, add
    process_pending_softirqs() in the loop of dumping timer info.

    Signed-off-by: Lan Tianyu <tianyu.lan@xxxxxxxxx>
    Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
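The shape of that change is just a periodic softirq poll inside the dump loop.
A rough sketch against Xen internals (for_each_online_cpu and
process_pending_softirqs() are real Xen interfaces; dump_one_cpu_timers() is a
hypothetical stand-in for the actual per-CPU printing):

    static void dump_timer_queues(void)
    {
        unsigned int cpu;

        for_each_online_cpu ( cpu )
        {
            /* Service pending softirqs each iteration so a dump over
             * many CPUs cannot starve them and trip the NMI watchdog. */
            process_pending_softirqs();
            dump_one_cpu_timers(cpu);   /* hypothetical per-CPU dump */
        }
    }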
commit 9f5eff08a6a6f58645fb48382c843973674042c9
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Wed Oct 5 14:20:10 2016 +0200

    x86emul: check for FPU availability

    We can't exclude someone wanting to hide the FPU from guests.

    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper@xxxxxxxxxx>

commit beeeaa920049c88af035b3dee8e20926d9d426f8
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Wed Oct 5 14:19:43 2016 +0200

    x86emul: deliver correct math exceptions

    #MF only applies to x87 instructions. SSE and AVX ones need #XM to
    be raised instead, unless CR4.OSXMMEXCPT is clear, in which case
    #UD needs to result. (But note that this is only a latent issue -
    we don't emulate any instructions so far which could result in #XM.)

    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
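That routing rule reduces to a three-way choice on instruction class and one
CR4 bit. An illustrative reduction in standalone C (not the emulator's actual
code; fp_class and math_exception_vector are invented names):

    enum fp_class { FP_X87, FP_SSE_AVX };

    #define CR4_OSXMMEXCPT (1ul << 10)   /* architectural CR4 bit */

    /* x87 faults raise #MF (vector 16); SSE/AVX faults raise #XM
     * (vector 19) when CR4.OSXMMEXCPT is set, else #UD (vector 6). */
    static unsigned int math_exception_vector(enum fp_class cls,
                                              unsigned long cr4)
    {
        if ( cls == FP_X87 )
            return 16;                           /* #MF */
        return (cr4 & CR4_OSXMMEXCPT) ? 19 : 6;  /* #XM : #UD */
    }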
commit cab9638a42457d2ab360c60ec419cdef4c75ca54
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Wed Oct 5 14:18:42 2016 +0200

    x86emul: honor guest CR4.OSFXSR and CR4.OSXSAVE

    These checks belong into the emulator instead of hvmemul_get_fpu().
    The CR0.PE/EFLAGS.VM ones can actually just be ASSERT()ed, as
    decoding should make it impossible to get into get_fpu() with them
    in the wrong state.

    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>

(qemu changes not included)
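Those checks amount to testing the relevant guest CR4 bit per FPU type, with
the mode invariant merely asserted. A rough standalone sketch under those
assumptions (illustrative names throughout; fpu_usable() is not Xen's
get_fpu() hook, and tying the assertion to the VEX/YMM case is this sketch's
reading of "decoding makes the wrong state impossible"):

    #include <assert.h>
    #include <stdbool.h>

    /* Architectural bit positions used below. */
    #define CR0_PE       (1ul << 0)
    #define EFLAGS_VM    (1ul << 17)
    #define CR4_OSFXSR   (1ul << 9)
    #define CR4_OSXSAVE  (1ul << 18)

    enum fpu_type { FPU_X87, FPU_MMX, FPU_XMM, FPU_YMM };

    static bool fpu_usable(enum fpu_type type, unsigned long cr0,
                           unsigned long cr4, unsigned long eflags)
    {
        /* VEX encodings exist only outside real and virtual-8086
         * mode, so decode has already guaranteed this much. */
        if ( type == FPU_YMM )
            assert((cr0 & CR0_PE) && !(eflags & EFLAGS_VM));

        switch ( type )
        {
        case FPU_XMM:                    /* SSE needs CR4.OSFXSR */
            return cr4 & CR4_OSFXSR;
        case FPU_YMM:                    /* AVX needs CR4.OSXSAVE */
            return cr4 & CR4_OSXSAVE;
        default:                         /* x87/MMX: no CR4 gate */
            return true;
        }
    }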
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel