
Re: [Xen-devel] [PATCH v1 07/12] tmem/libxc: Squash XEN_SYSCTL_TMEM_OP_[SET|SAVE]..



On Wed, Sep 28, 2016 at 05:42:21AM -0400, Konrad Rzeszutek Wilk wrote:
> Specifically:
> 
> XEN_SYSCTL_TMEM_OP_SET_[WEIGHT,COMPRESS] are now done via:
> 
>  XEN_SYSCTL_TMEM_SET_CLIENT_INFO
> 
> and XEN_SYSCTL_TMEM_OP_SAVE_GET_[VERSION,MAXPOOLS,
> CLIENT_WEIGHT, CLIENT_FLAGS] can now be retrieved via:
> 
>  XEN_SYSCTL_TMEM_GET_CLIENT_INFO
> 
> All this information is now in 'struct tmem_client' and
> that is what we pass around.
> 
> Hypervisor-wise we had to do a bit of casting. The
> 'struct xen_sysctl_tmem_op'->buf variable is a pointer to
> char. Casting that with guest_handle_cast to a structure
> (struct tmem_client) does not work, hence we first have to
> cast it to void and then to xen_sysctl_tmem_client_t.
> This could be improved by having an union and in fact the
> patch titled:
> "tmem: Handle 'struct tmem_info' as a seperate field in the"
> does that. It could be squashed in here but that can make
> it harder to review.
> 
> On the toolstack side, prior to this patch, xc_tmem_control
> would use the bounce buffer only when arg1 was set and the cmd
> was to list. With XEN_SYSCTL_TMEM_OP_SET_[WEIGHT|COMPRESS]
> that made sense, as 'arg1' would hold the value. However,
> for the other ones (say XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_UUID)
> 'arg1' would be the length of 'buf'. If this is confusing,
> don't despair: the patch titled
> "tmem/xc_tmem_control: Rename 'arg1' to 'len' and 'arg2' to arg."
> takes care of that.
> 
> The astute reader of the toolstack code will discover that
> we only used the bounce buffer for LIST, not for any other
> subcommand that used 'buf'! Which means that the contents
> of 'buf' would never be copied back to the caller's 'buf'!
> 
> I am not sure how this could possibly have worked; perhaps Xen 4.1
> (when this was introduced) was more relaxed about the bounce buffer
> being enabled. Anyhow, this fixes xc_tmem_control to bounce the
> buffer for any subcommand that sets 'arg1'.
> 
> Lastly, some of the checks in xc_tmem_[restore|save] are removed
> as they can't ever be reached (I am not even sure how they could
> have been reached in the original submission). One of them
> is the check of the weight against -1, when in fact the
> hypervisor would never have provided that value.
> 
> Now the checks are simple - as the hypercall always returns
> ->version and ->maxpools (which is mirroring how it was done
> prior to this patch). But if one wants to check whether a guest
> has any tmem activity then the patch titled
> "tmem: Batch and squash XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_
> [FLAGS,NPAGES,UUID] in one sub-call: XEN_SYSCTL_TMEM_OP_GET_POOLS."
> adds an ->nr_pools to check for that.
> 
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
> ---
> ---
> Cc: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
> Cc: Wei Liu <wei.liu2@xxxxxxxxxx>
> 
> v1: New submission.
> ---
>  tools/libxc/xc_tmem.c             | 72 ++++++++++++----------------
>  tools/libxl/libxl.c               | 26 +++++++---
>  tools/python/xen/lowlevel/xc/xc.c |  2 -

Acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>

I welcome the effort to clean up tmem but I don't have any knowledge how
it works, nor can I test your code, so I can only trust your judgement
on this.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

