
Re: [Xen-devel] VMX status report for Xen:26783 & Dom0:3.8.6



> -----Original Message-----
> From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@xxxxxxxxxx]
> Sent: Monday, May 06, 2013 11:00 PM
> To: Ren, Yongjie
> Cc: xen-devel@xxxxxxxxxxxxx; Xu, YongweiX; Liu, SongtaoX
> Subject: Re: [Xen-devel] VMX status report for Xen:26783 & Dom0:3.8.6
> 
> On Mon, Apr 15, 2013 at 12:42:41PM +0000, Ren, Yongjie wrote:
> > Hi, All,
> > This is the test report for xen-unstable tree on some Intel platforms.
> > We found 2 new bugs and 3 fixed bugs.
> >
> > Version Info:
> >
> > =================================================================
> > Xen changeset:      26783:6d9e1f986e37
> > Dom0:               Linux 3.8.6
> > Upstream QEMU:      47b5264eb3e1cd2825e48d28fd0d1b239ed53974
> > =================================================================
> >
> > New issues(2):
> > ==============
> > 1. Xen hvm guest vcpu stuck when setting vcpus more than 32 (already fixed)
> >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1842  (fixed)
> 
> So this shows that c/s 26803 has the fix, but looking at 26803 I see:
> 
> xen: arm: remove PSR_MODE_MASK from public interface.
> ?
> 
Oh, my colleague Yongwei's comment is not very accurate.
He just means that c/s 26807 works fine with respect to this bug; it doesn't
mean that c/s 26807 itself is the changeset that fixed it.
I think some c/s after 26440 and before 26807 fixed this bug.
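
If we need to pin down the exact fixing changeset, 'hg bisect' over that
range on the xen-unstable.hg tree should find it; a rough sketch (here
"bad" means the bug is present, and the test step is whatever reproduces
the stuck-vCPU hang in your setup):

    hg bisect --reset
    hg bisect --bad 26440      # bug present here
    hg bisect --good 26807     # bug gone here
    # hg now checks out a changeset in the middle; rebuild Xen, boot an
    # HVM guest with more than 32 vCPUs, then mark the result:
    hg bisect --good           # ...if the guest boots fine
    hg bisect --bad            # ...if the vCPUs are still stuck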

> Somehow I thought that the issue you faced would have been fixed with:
> http://xenbits.xen.org/hg/xen-unstable.hg/rev/c00e71c4dd07
> (c/s 26860 ?)
> 
I don't think so. :-)  c/s 26807 already works well for the 32-vCPU issue,
so the fix must have landed at or before that changeset.

> > 2. Live migration fail when migrating the same guest for more than 2 times
> >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1845
> 
> And this particular guest worked with what version of Xen?
> 
c/s 26532 (with traditional qemu-xen) works fine; the bug is not present there.
With Xen c/s 26580, the repeated migration fails on the 2nd attempt.

With the latest c/s 26961, we use qemu-upstream instead of traditional
qemu-xen, and this bug can no longer be reproduced.
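
For reference, this is roughly how we exercise the repeated migration
(the guest name 'hvm-guest' is a placeholder; migrating to localhost
avoids needing a second host, assuming ssh to localhost is set up):

    # Migrate the same guest repeatedly; on the affected
    # changesets the 2nd round fails.
    for i in 1 2 3 4; do
        echo "migration round $i"
        xl migrate hvm-guest localhost || break
    done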

We'll post more updates in Bugzilla before sending a new VMX status report.

> >
> > Fixed issues(3):
> > ==============
> > 1. [VT-D] VNC console broken after detach NIC from guest
> >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1736
> > 2. sometimes live migration failed and reported call trace in dom0
> >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1841
> > 3. Xen hvm guest vcpu stuck when setting vcpus more than 32
> >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1842
> >
> > Old issues(10):
> > ==============
> > 1. [ACPI] Dom0 can't resume from S3 sleep
> >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1707
> > 2. [XL]"xl vcpu-set" causes dom0 crash or panic
> >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1730
> 
> Hmm and it looks as if v3.8 is still hitting this.
> Could you provide me with your .config please?
> 
Yes, I've attached our .config file for Linux 3.8.9, used as the Dom0 kernel.

> > 3. Sometimes Xen panic on ia32pae Sandybridge when restore guest
> >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1747
> > 4. 'xl vcpu-set' can't decrease the vCPU number of a HVM guest
> >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1822
> 
> 
> And this is with RHEL 6.2. It seems to work for me, but then I am
> testing it with upstream kernels.
> 
After upgrading the guest kernel to 3.8.9, it basically works.
'xl vcpu-set' can increase and decrease the guest vCPU count with traditional
qemu-xen. But if I boot the guest with 4 vcpus and 'maxvcpus=16', then run
'xl vcpu-set $dom_id 16' to add 12 vcpus at once, the guest emits a call
trace reporting "rcu_sched detected stalls on CPUs/tasks".
Increasing or decreasing the vCPU count by only 2 or 4 at a time works well.
Note that the 'maxvcpus=X' config option can't be used with qemu-upstream;
see: http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1837
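
To make the failing vs. working sequences concrete, a sketch (the guest
name 'hvm-guest' is illustrative):

    # Guest config fragment (traditional qemu-xen):
    #   vcpus    = 4     # vCPUs online at boot
    #   maxvcpus = 16    # ceiling for later hotplug

    # Jumping from 4 straight to 16 triggers the rcu_sched stall:
    xl vcpu-set hvm-guest 16

    # Stepping up by 4 at a time works fine:
    xl vcpu-set hvm-guest 8
    xl vcpu-set hvm-guest 12
    xl vcpu-set hvm-guest 16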

We'll post more updates in Bugzilla before sending a new VMX status report.

> > 5. Dom0 cannot be shutdown before PCI device detachment from guest
> >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1826
> > 6. xl pci-list shows one PCI device (PF or VF) could be assigned to two different guests
> >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1834
> > 7. [upstream qemu] Guest free memory with upstream qemu is 14MB lower than that with qemu-xen-unstable.git
> >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1836
> > 8. [upstream qemu] 'maxvcpus=NUM' item is not supported in upstream QEMU
> >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1837
> > 9. [upstream qemu] Guest console hangs after save/restore or live-migration when setting 'hpet=0' in guest config file
> >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1838
> > 10. [upstream qemu] 'xen_platform_pci=0' setting cannot make the guest use emulated PCI devices by default
> >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1839
> >
> >
> > Best Regards,
> >      Yongjie (Jay)
> >
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@xxxxxxxxxxxxx
> > http://lists.xen.org/xen-devel
> >

Attachment: config-3.8.9
Description: config-3.8.9

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
