
Re: [Xen-devel] [RFC PATCH 4/6] vm_event: Use slotted channels for sync requests.



On 12/20/18 4:28 PM, Paul Durrant wrote:
>> -----Original Message-----
>> From: Petre Ovidiu PIRCALABU [mailto:ppircalabu@xxxxxxxxxxxxxxx]
>> Sent: 20 December 2018 14:26
>> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx
>> Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>; Wei Liu
>> <wei.liu2@xxxxxxxxxx>; Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx>; Konrad
>> Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>; George Dunlap
>> <George.Dunlap@xxxxxxxxxx>; Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; Ian
>> Jackson <Ian.Jackson@xxxxxxxxxx>; Tim (Xen.org) <tim@xxxxxxx>; Julien
>> Grall <julien.grall@xxxxxxx>; Tamas K Lengyel <tamas@xxxxxxxxxxxxx>; Jan
>> Beulich <jbeulich@xxxxxxxx>; Roger Pau Monne <roger.pau@xxxxxxxxxx>
>> Subject: Re: [Xen-devel] [RFC PATCH 4/6] vm_event: Use slotted channels
>> for sync requests.

>> On Thu, 2018-12-20 at 12:05 +0000, Paul Durrant wrote:
>>>> The memory for the asynchronous ring and the synchronous channels
>>>> will be allocated from domheap and mapped to the controlling domain
>>>> using the foreignmemory_map_resource interface. Unlike the current
>>>> implementation, the allocated pages are not part of the target DomU,
>>>> so they will not be reclaimed when the vm_event domain is disabled.
>>>
>>> Why re-invent the wheel here? The ioreq infrastructure already does
>>> pretty much everything you need AFAICT.
>>>
>>>    Paul

>> I wanted to preserve as much as possible of the existing vm_event DOMCTL
>> interface and add only the code necessary to allocate and map the
>> vm_event_pages.

> That means we have two subsystems duplicating a lot of functionality
> though. It would be much better to use the ioreq server infrastructure,
> if possible, than to provide a compatibility interface via DOMCTL.

Just to clarify the compatibility issue: there's a third element between Xen and the introspection application, namely the Linux kernel, which needs to be fairly recent for the whole ioreq machinery to work. The QEMU code also seems to fall back to the old way of working when the kernel is not recent enough.

This means there's a choice to be made here: either we keep backwards compatibility with the old vm_event interface (in which case we can't drop the waitqueue code), or we switch to the new one and leave older setups in the dust (but then there's less code duplication and we can get rid of the waitqueue code).

In any event, it's not very clear (to me, at least) how the envisioned ioreq replacement should work. I assume we're meant to use the whole infrastructure (as opposed to what we're doing now, which is using only the map-hypervisor-memory part), i.e. both the mapping and the signalling. Could we discuss this in more detail? Are there any docs on this, or minimal ioreq clients (the way xen-access.c is for vm_event) that we might use as a reference?
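For concreteness, the mapping half is the part we already understand: a consumer acquires the shared pages via the resource-mapping interface. A rough sketch of that side, based on the libxenforeignmemory API (illustrative only, not compiled here; error handling abbreviated, and the signalling half is exactly what I'm asking about):

```c
/* Sketch: map an ioreq server's shared pages into the privileged
 * domain via the resource-mapping interface, i.e. the same path
 * foreignmemory_map_resource already gives us for vm_event. */
#include <sys/mman.h>
#include <xenforeignmemory.h>
#include <xen/memory.h>         /* XENMEM_resource_ioreq_server */

static void *map_ioreq_server_pages(domid_t domid, unsigned int ioservid,
                                    unsigned long nr_frames,
                                    xenforeignmemory_handle **fmem_out,
                                    xenforeignmemory_resource_handle **res_out)
{
    void *addr = NULL;
    xenforeignmemory_handle *fmem = xenforeignmemory_open(NULL, 0);

    if ( !fmem )
        return NULL;

    *res_out = xenforeignmemory_map_resource(
        fmem, domid, XENMEM_resource_ioreq_server, ioservid,
        0 /* first frame */, nr_frames, &addr,
        PROT_READ | PROT_WRITE, 0);

    if ( !*res_out )
    {
        xenforeignmemory_close(fmem);
        return NULL;
    }

    *fmem_out = fmem;
    /* Shared pages live at 'addr'; release with
     * xenforeignmemory_unmap_resource() when done. */
    return addr;
}
```

What's missing from this picture is how the event-channel signalling and request/response slotting would be wired up on the vm_event side if we reuse the ioreq server machinery wholesale.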


Thanks,
Razvan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel