
Re: [Xen-devel] VMX status report. Xen:24911 & Dom0: d93dc5c4... Nested VMX testing?



On Wed, Mar 14, 2012 at 08:00:09AM +0000, Ren, Yongjie wrote:
> > -----Original Message-----
> > From: xen-devel-bounces@xxxxxxxxxxxxx
> > [mailto:xen-devel-bounces@xxxxxxxxxxxxx] On Behalf Of Pasi Kärkkäinen
> > Sent: Tuesday, March 13, 2012 11:39 PM
> > To: Zhou, Chao
> > Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
> > Subject: Re: [Xen-devel] VMX status report. Xen:24911 & Dom0:
> > d93dc5c4... Nested VMX testing?
> > 
> > On Tue, Mar 13, 2012 at 09:18:27AM +0000, Zhou, Chao wrote:
> > > Hi all,
> > >
> > 
> > Hello,
> > 
> > > This is the test report for the xen-unstable tree. We've switched our Dom0
> > > to upstream Linux 3.1-rc7 instead of Jeremy's 2.6.32.x tree.
> > > We've also upgraded our nightly test system from RHEL5.5 to RHEL6.2.
> > > We found four new issues, and one old issue has been fixed.
> > >
> > 
> > Is Intel planning to start testing Nested VMX?
> 
> Yes, we've made several automated test cases for Nested VMX.
>

Great! 

> The bad news is that there are still some bugs in Nested VMX.
> From my recent testing, the status of Nested VMX is as follows:
>    Xen on Xen: failed. The L1 Xen guest can't boot up; it hangs while booting
> the Xen hypervisor.
>    KVM on Xen: pass. An L2 RHEL5.5 guest can boot up on an L1 KVM guest.
> (We use the same versions of dom0 and xen-unstable as mentioned in the report.)
> Intel will put more effort into Nested VMX bug fixing this year.
> 

Ok, thanks for the results. I'm planning to test Nested VMX myself as well in
the near future.
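
For anyone else who wants to try it, here is roughly the L1 guest config I
expect to use (a minimal sketch only, assuming the "nestedhvm" xl option in
current xen-unstable; the guest name, memory, disk and bridge values below are
just placeholders):

    # Minimal xl config sketch for an L1 HVM guest for nested VMX testing (untested)
    builder   = "hvm"
    name      = "l1-xen"                           # placeholder guest name
    memory    = 2048
    vcpus     = 2
    nestedhvm = 1                                  # expose VMX to the L1 guest
    disk      = [ "phy:/dev/vg0/l1-xen,xvda,w" ]   # placeholder disk
    vif       = [ "bridge=xenbr0" ]                # placeholder network bridge

Inside the booted L1 guest, "grep vmx /proc/cpuinfo" should then show the vmx
flag before trying to start an L2 guest.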

-- Pasi

> 
> > It seems AMD has done a lot of testing of Nested SVM with Xen.
> > 
> > Thanks,
> > 
> > -- Pasi
> > 
> > 
> > > Version Info
> > >
> > > =================================================================
> > > xen-changeset:  24911:d7fe4cd831a0
> > > Dom0: linux.git  3.1-rc7 (commit: d93dc5c4...)
> > >
> > > =================================================================
> > >
> > >
> > > New issues (4)
> > > ==============
> > > 1. When detaching a VF from an HVM guest, "xl dmesg" shows some warning
> > >    messages
> > >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1809
> > > 2. Dom0 hangs when booting a guest with a VF (the guest had previously
> > >    been booted with a different VF)
> > >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1810
> > > 3. RHEL6.2/6.1 guests run quite slowly
> > >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1811
> > > 4. After detaching a VF from a guest, shutting down the guest is very slow
> > >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1812
> > >
> > > Fixed issue (1)
> > > ==============
> > > 1. Dom0 crashes on power-off
> > >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1740
> > >     ---- kernel 3.1.0 no longer has this issue
> > >
> > > Old issues (5)
> > > ==============
> > > 1. [ACPI] System can't resume after suspend
> > >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1707
> > > 2. [XL] "xl vcpu-set" causes dom0 crash or panic
> > >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1730
> > > 3. [VT-d] Failure to detach a NIC from a guest
> > >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1736
> > > 4. Xen sometimes panics on ia32pae SandyBridge when restoring a guest
> > >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1747
> > > 5. [VT-d] Device reset fails when creating/destroying a guest
> > >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1752
> > >
> > >
> > > Thanks
> > > Zhou, Chao
> > >

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

