
Re: [Xen-devel] [PATCH v2 9/9] tmem: Batch and squash XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_[FLAGS, NPAGES, UUID] in one sub-call: XEN_SYSCTL_TMEM_OP_GET_POOLS.



On 30/09/16 19:11, Konrad Rzeszutek Wilk wrote:
> These operations are used during the save phase of migration.
> Instead of doing 64 hypercalls, let's do just one. We modify
> the 'struct xen_tmem_client' structure (used in
> XEN_SYSCTL_TMEM_OP_[GET|SET]_CLIENT_INFO) to have an extra field,
> 'nr_pools'. Armed with that, the code slurping up pages from the
> hypervisor can allocate a structure (struct tmem_pool_info) big
> enough to contain all the active pools, and then iterate over
> each one and save it in the stream.
>
> We are also re-using one of the subcommand numbers for this;
> as such, XEN_SYSCTL_INTERFACE_VERSION needed to be incremented,
> which was done in the patch titled:
> "tmem/libxc: Squash XEN_SYSCTL_TMEM_OP_[SET|SAVE].."
>
> In xc_tmem_[save|restore] we also add proper memory handling
> of 'buf' and 'pools'. Because of the loops, and to make the code
> as easy as possible to review, we add a goto label and jump to it
> for almost all error conditions.
>
> The inttypes.h include is required for the PRId64 macro,
> which is needed to compile this code under 32-bit.
>
> Acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>

Acked-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

