
Re: [Xen-devel] "xl vcpu-set" not persistent across reboot?



On Mon, 6 Jun 2016, Wei Liu wrote:
> On Mon, Jun 06, 2016 at 02:07:46PM +0100, Stefano Stabellini wrote:
> > On Mon, 6 Jun 2016, Jan Beulich wrote:
> > > >>> On 03.06.16 at 18:35, <wei.liu2@xxxxxxxxxx> wrote:
> > > > I got a patch ready, but QEMU upstream refuses to start on the
> > > > receiving end with the following error message:
> > > > 
> > > > qemu-system-i386: Unknown savevm section or instance 'cpu_common' 1
> > > > qemu-system-i386: load of migration failed: Invalid argument
> > > > 
> > > > With a QEMU traditional HVM guest or a PV guest, everything works
> > > > fine -- the guest is up and running with all hotplugged CPUs
> > > > available.
> > > > 
> > > > So I think the relevant libxl information is transmitted, but we
> > > > also need to fix QEMU upstream. That's a separate issue, though.
> > 
> > To clarify: you applied the patch below, started a VM, hotplugged
> > a vcpu, rebooted the guest, then migrated the VM, and at that point
> > you hit this error?
> > 
> 
> Apply this patch, start a guest, hotplug some CPUs, bring them online
> inside the guest, and then "xl migrate guest localhost".
>
> You will then see the error above in the QEMU log.
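>
> A minimal reproduction sketch (the domain name "guest", the config file
> name and the CPU counts are illustrative; the guest config is assumed to
> contain vcpus=1 and maxvcpus=4):
>
>     xl create guest.cfg
>     xl vcpu-set guest 4        # hotplug vCPUs 1-3
>     # inside the guest, bring each new vCPU online, e.g.:
>     echo 1 > /sys/devices/system/cpu/cpu1/online
>     xl migrate guest localhost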
> 
> > What are the QEMU command line arguments at the receiving side? Are you
> > sure that the increased vcpu count is passed to the receiving end by
> > libxl? It looks like QEMU was started with the old vcpu count as a
> > command line argument (-smp etc.) at the receiving end.
> > 
> 
> The QEMU command line on the receiving side should be the same as on the
> sending side.
>
> It is something like "-smp 1,maxcpus=4".
> 
> Does that mean we need to somehow alter QEMU's command line?

Yes, that's right. The device state that we pass to QEMU is just the
state of the devices specified by the command line args.
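
To illustrate with assumed values: if the guest was created with one vCPU
and later hotplugged to four, the saved device state contains cpu_common
sections for CPU instances 0 through 3, while a receiving QEMU started
with the original

    -smp 1,maxcpus=4

only instantiates CPU 0, so incoming instance 1 is unknown; hence the
"Unknown savevm section or instance 'cpu_common' 1" error. The receiving
side would need to be started with something like

    -smp 4,maxcpus=4

so that the incoming CPU state finds matching instances.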
