
Re: [Xen-devel] RFC: QEMU bumping memory limit and domain restore



On 02/06/15 16:49, Ian Campbell wrote:
> On Tue, 2015-06-02 at 15:08 +0100, Wei Liu wrote:
> [...]
>>> So here is a proof of concept patch to record and honour that value
>>> during migration.  A new field is added to the IDL. Note that we don't
>>> provide an xl-level config option for it, and we mandate that it keep
>>> its default value during domain creation. This is to prevent libxl
>>> users from setting it themselves and causing unforeseen repercussions.
>> [...]
>>> This field is mandated to keep its default value during guest creation
>>> to avoid unforeseen repercussions. It is only honoured when restoring a
>>> guest.
> IMHO this means that the libxl API/IDL is the wrong place for this
> value. Only user and/or application serviceable parts belong in the API.
>
> So while I agree that this value needs to be communicated across a
> migration, the JSON blob is not the right mechanism for doing so. IOW if
> you go down this general path I think you need a new
> field/record/whatever in the migration protocol at some layer or other
> (if not libxc then at the libxl layer).
>
> To my mind this "actual state" vs "user configured state" is more akin
> to the sort of thing which is in the hypervisor save blob or something
> like that (nb: This is not a suggestion that it should go there).
>
> IIRC Don also outlined another case, which is
>     xl create -p
>     xl migrate
>     xl unpause
>
> That case might need more thought if any bumping can happen after the
> migration, i.e. on unpause?
>
>

The problem is qemu using set_max_mem.  It should never have done so.
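
For reference, the hypercall in question is XEN_DOMCTL_max_mem, which
qemu reaches through the libxc wrapper xc_domain_setmaxmem().  The
pattern being objected to looks roughly like the sketch below (purely
illustrative: the helper name and the extra_kb parameter are made up,
this is not the actual qemu code):

    #include <xenctrl.h>

    /* Illustrative sketch: raising a domain's memory limit directly
     * via libxc, behind the toolstack's back.  Not the real qemu code. */
    static int bump_maxmem_behind_libxl(xc_interface *xch, uint32_t domid,
                                        uint64_t extra_kb)
    {
        xc_dominfo_t info;

        /* Ask the hypervisor for the current limit... */
        if (xc_domain_getinfo(xch, domid, 1, &info) != 1 ||
            info.domid != domid)
            return -1;

        /* ...and raise it without telling libxl.  libxl's recorded
         * maximum is now stale, which is what breaks migration. */
        return xc_domain_setmaxmem(xch, domid, info.max_memkb + extra_kb);
    }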

Nothing other than libxl should be using such hypercalls, at which point
libxl's idea of guest memory is accurate and the bug ceases to exist.
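
For comparison, the toolstack-level route for the same adjustment is
libxl_domain_setmaxmem().  A minimal usage sketch (the wrapper is
hypothetical, and error handling and surrounding policy are omitted):

    #include <libxl.h>

    /* Minimal sketch: the limit is raised through libxl, so the layer
     * that owns memory policy is the one making the change. */
    static int bump_maxmem_via_libxl(libxl_ctx *ctx, uint32_t domid,
                                     uint64_t new_max_kb)
    {
        return libxl_domain_setmaxmem(ctx, domid, new_max_kb);
    }

Either way the point is the same: the limit should only change through
the component that records it.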

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

