
Re: [PATCH v1 4/7] x86/vmx: add do_vmtrace_op



On Wed, Jun 17, 2020 at 09:13:05PM +0200, Michał Leszczyński wrote:
> ----- On 16 Jun 2020, at 19:23, Roger Pau Monné roger.pau@xxxxxxxxxx wrote:
> 
> > On Tue, Jun 16, 2020 at 05:22:06PM +0200, Michał Leszczyński wrote:
> >> +        buf_order = get_order_from_bytes(a.size);
> >> +
> >> +        if ( (a.size >> PAGE_SHIFT) != (1 << buf_order) ) {
> > 
> > Oh here is the check. I think you can move this with the checks above
> > by doing a.size & ~PAGE_MASK.
> 
> 
> I believe it's more strict than a.size & ~PAGE_MASK. I think the CPU expects 
> the buffer size to be a power of 2, so you can have a 64 MB or a 128 MB 
> buffer, but not a 96 MB one.

Oh, sorry, I didn't realize. I think it's clearer to check whether the
size is not a power of two by doing (size & (size - 1)). This could be
joined with the previous checks.
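
Something along these lines, i.e. a minimal sketch of the combined
check (the earlier argument checks aren't part of this excerpt, so
take the exact condition as illustrative rather than the final code):

    /* Reject a zero size, a size that isn't page aligned, or one that
     * isn't a power of two.  Field names follow the hypercall argument
     * struct quoted above. */
    if ( a.size == 0 || (a.size & ~PAGE_MASK) ||
         (a.size & (a.size - 1)) )
        return -EINVAL;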

> > There are a couple of options here: maybe the caller could provide
> > its own buffer, then Xen would take an extra reference to those pages
> > and set them up to be used as buffers.
> > 
> > Another alternative would be to use domheap memory but not let the
> > caller map it directly, and instead introduce a hypercall to copy
> > from the internal Xen buffer into a user-provided one.
> > 
> > How much memory is used on average by those buffers? That would help
> > decide on a model that best fits the usage.
> 
> 
> From 4 kB to 4 GB. Right now I use 128 MB buffers and it takes just a few 
> seconds to fill them up completely.
> 
> I think I've just copied the pattern which is already present in Xen's code, 
> e.g. interfaces used by xenbaked/xentrace tools.

I think using XENMEM_acquire_resource will result in cleaner code
overall, and AFAICT it would also avoid having to share the pages with
Xen. It's also more in line with how new interfaces deal with this
kind of memory sharing.
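
FWIW, a minimal sketch of what the consumer side could look like with
that model, using libxenforeignmemory. Note that the
XENMEM_resource_vmtrace_buf resource type and the use of the vCPU id
as the resource instance are assumptions on my side, not something the
series defines yet:

    #include <stdio.h>
    #include <sys/mman.h>
    #include <xenforeignmemory.h>

    /* Map nr_frames pages of the (hypothetical) vmtrace resource of a
     * given vCPU read-only into the calling process.  Error/cleanup
     * paths are omitted for brevity. */
    static void *map_vmtrace_buf(domid_t domid, unsigned int vcpu,
                                 unsigned long nr_frames)
    {
        xenforeignmemory_handle *fmem = xenforeignmemory_open(NULL, 0);
        xenforeignmemory_resource_handle *res;
        void *addr = NULL;

        if ( !fmem )
            return NULL;

        res = xenforeignmemory_map_resource(fmem, domid,
                                            XENMEM_resource_vmtrace_buf,
                                            vcpu /* id */, 0 /* frame */,
                                            nr_frames, &addr,
                                            PROT_READ, 0);
        if ( !res )
            fprintf(stderr, "failed to map vmtrace buffer\n");

        return addr;
    }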

Roger.