
Re: [Xen-devel] [PATCH V3] tools/libxc, xen/x86: Added xc_set_mem_access_multi()



On 09/06/2016 01:16 PM, Ian Jackson wrote:
> Razvan Cojocaru writes ("[PATCH V3] tools/libxc, xen/x86: Added 
> xc_set_mem_access_multi()"):
>> Currently it is only possible to set mem_access restrictions for
>> a contiguous range of GFNs (or, as a particular case, for a single GFN).
>> This patch introduces a new libxc function taking an array of GFNs.
>> The alternative would be to set each page in turn, using a userspace-HV
>> roundtrip for each call, and triggering a TLB flush per page set.
>>
>> Signed-off-by: Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx>
>> Acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>
> 
> I have no objection with my tools maintainer hat on.  But I have a
> question for you and/or the hypervisor maintainers:
> 
> Could this aim be achieved with a multicall ?  (Can multicalls defer
> the TLB flush?)

I assume your question is: could we do multiple xc_set_mem_access()
calls and then call something like xc_tlb_flush() at the end,
instead of a single xc_set_mem_access_multi() call?

If that's the question, then the answer, to the best of my knowledge,
is yes: that would be achievable, although it would still require a
patch, since no such flush call is exposed by libxc today.

But that would still be suboptimal performance-wise. Each
xc_set_mem_access() call is a userspace <-> hypervisor round-trip. A
typical introspection application follows the basic xen-access.c model:
it receives an event, does things in response to it, then replies (at
which point the VCPU is usually allowed to resume running). If, as part
of processing the event, you'd like to set access restrictions on
hundreds of pages, that would require hundreds of xc_set_mem_access()
calls, i.e. hundreds of round-trips and TLB flushes, which would incur
noticeable overhead.
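By contrast, the batched call collapses all of that into a single
round-trip. A minimal sketch, assuming the prototype from the V3 patch
(uint8_t *access, uint64_t *pages, uint32_t nr; the final version may
of course still change during review):

#include <stdlib.h>
#include <xenctrl.h>

/*
 * Batched approach: the whole array of GFNs is handled in a single
 * hypercall, with a single TLB flush at the end.
 */
static int restrict_pages(xc_interface *xch, domid_t domid,
                          uint64_t *gfns, uint32_t nr)
{
    int rc;
    uint8_t *access = malloc(nr * sizeof(*access));

    if ( !access )
        return -1;

    /*
     * Each GFN can get its own access type; XENMEM_access_rx allows
     * reads and execution, so writes trap to the introspection agent.
     */
    for ( uint32_t i = 0; i < nr; i++ )
        access[i] = XENMEM_access_rx;

    /* Single userspace <-> hypervisor round-trip for all nr pages. */
    rc = xc_set_mem_access_multi(xch, domid, access, gfns, nr);

    free(access);
    return rc;
}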

In short, I believe that there's a very strong case to be made for this
approach vs. multiple calls.

I hope I've understood and addressed your question.


Thanks,
Razvan
