
Re: [Xen-devel] [RFC PATCH 4/6] vm_event: Use slotted channels for sync requests.



On 1/10/19 11:58 AM, Paul Durrant wrote:
-----Original Message-----


Why re-invent the wheel here? The ioreq infrastructure already does
pretty much everything you need AFAICT.

    Paul

I wanted to preserve as much as possible of the existing vm_event DOMCTL
interface and add only the code necessary to allocate and map the
vm_event_pages.

That means we have two subsystems duplicating a lot of functionality
though. It would be much better to use ioreq server if possible than
provide a compatibility interface via DOMCTL.

Just to clarify the compatibility issue: there's a third element between
Xen and the introspection application, namely the Linux kernel, which needs
to be fairly recent for the whole ioreq machinery to work. The QEMU code
also seems to fall back to the old way of working when that support is missing.


That's correct. For the IOREQ server there is a fall-back mechanism for when privcmd 
doesn't support resource mapping.
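
For anyone following along, here's a minimal sketch (not code taken from the patch series) of what that fallback looks like from a toolstack client's point of view: try libxenforeignmemory's resource-mapping call first, and fall back to the legacy gfn-based mapping when privcmd is too old to support it. The EOPNOTSUPP check and the details of the legacy path are assumptions on my part.

/*
 * Illustrative sketch only: map an IOREQ server's pages via
 * XENMEM_resource_ioreq_server and fall back to the legacy gfn-based
 * path when the privcmd driver lacks resource mapping. Error handling
 * is trimmed; the EOPNOTSUPP check is an assumption.
 */
#include <errno.h>
#include <stdio.h>
#include <sys/mman.h>
#include <xenctrl.h>
#include <xenforeignmemory.h>

static void *map_ioreq_pages(xenforeignmemory_handle *fmem, uint32_t domid,
                             uint32_t srv_id, unsigned long nr_frames)
{
    void *addr = NULL;
    xenforeignmemory_resource_handle *res;

    res = xenforeignmemory_map_resource(fmem, domid,
                                        XENMEM_resource_ioreq_server,
                                        srv_id, 0, nr_frames, &addr,
                                        PROT_READ | PROT_WRITE, 0);
    if ( res )
        return addr;                /* new path: pages are hypervisor memory */

    if ( errno != EOPNOTSUPP )      /* a real failure, not just an old privcmd */
        return NULL;

    /*
     * Fall-back path: an older privcmd cannot do resource mapping, so the
     * caller has to obtain the ioreq server's gfns (e.g. via
     * xc_hvm_get_ioreq_server_info()) and map them with
     * xenforeignmemory_map() instead, the way older QEMU does.
     */
    fprintf(stderr, "privcmd lacks resource mapping, using legacy path\n");
    return NULL;
}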

This means that there's a choice to be made here: either we keep
backwards compatibility with the old vm_event interface (in which case
we can't drop the waitqueue code), or we switch to the new one and leave
older setups in the dust (but there's less code duplication and we can
get rid of the waitqueue code).


I don't know what your compatibility model is. QEMU needs to maintain 
compatibility across various different versions of Xen and Linux so there are 
many shims and much compat code. You may not need this.

Our current model is to deploy a special guest (which we call an SVA, short for security virtual appliance), with its own kernel and applications, that for all intents and purposes acts dom0-like.

Since in that scenario we control the guest kernel, backwards compatibility for the case where the kernel does not support the proper ioctl is not a priority for us. That said, it might very well be an issue for someone, and we'd like to be well-behaved citizens and not inconvenience other vm_event consumers. Tamas, is this something you'd be concerned about?

What we do care about is being able to fall back when the host hypervisor does not know anything about the new ioreq infrastructure. IOW, nobody can stop a client from running a Xen 4.7-based XenServer, on top of which our introspection guest will not be able to use the new ioreq code even if it's running the latest kernel. But that fallback can be handled at the application level and would not require hypervisor-level backwards compatibility support (whereas in the first case, an old kernel, it would).
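
To make that concrete, here is a rough sketch of the application-level fallback I have in mind, assuming the final series grows some new enable call. vm_event_enable_slotted() and HAVE_SLOTTED_VM_EVENT are purely hypothetical placeholders; xc_monitor_enable() is the existing libxc call that sets up and maps the classic one-page ring.

/*
 * Illustrative sketch of an application-level fallback. The slotted
 * names are hypothetical; only xc_monitor_enable() exists today.
 */
#include <xenctrl.h>

void *introspection_ring_setup(xc_interface *xch, uint32_t domid,
                               uint32_t *port)
{
    void *ring = NULL;

#ifdef HAVE_SLOTTED_VM_EVENT
    /* Hypothetical new interface: larger, hypervisor-allocated buffer. */
    ring = vm_event_enable_slotted(xch, domid, port);
#endif

    if ( !ring )
        /*
         * Old hypervisor (e.g. a Xen 4.7-based XenServer): fall back to
         * the classic single-page ring via the existing DOMCTL interface.
         */
        ring = xc_monitor_enable(xch, domid, port);

    return ring;
}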

On top of all of this, there's Andrew's wish to be able to get rid of the current vm_event waitqueue code, which is making migration brittle.

So, if I understand the situation correctly, we need to negotiate the following:

1. Should we try to switch to the ioreq infrastructure for vm_event or use our custom one? If I'm remembering things correctly, Paul and Jan are for it, Andrew is somewhat against it, Tamas has not expressed a preference.

2. However we approach the new code, should we, or should we not, also provide a backwards compatibility layer in the hypervisor? We don't need it, but somebody might, and it's probably not a good idea to design based entirely on the needs of one use case. Tamas may have different needs here, and maybe other members of the xen-devel community as well. Andrew prefers that we don't, since skipping it lets us remove the waitqueue code.

To reiterate how this got started: we want to move the ring buffer memory from the guest to the hypervisor (we've had cases of OSes reclaiming that page after the first introspection application exits), and we want to make that memory bigger, so that more events, each carrying more information (i.e. bigger events), will fit into it. That's essentially all we're after.
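
For illustration only, the kind of layout I have in mind for a bigger, hypervisor-allocated buffer is sketched below; the structure and field names are invented here for discussion and are not the ones from this RFC.

/*
 * Illustrative layout only: one fixed-size slot per outstanding request,
 * spread over several hypervisor-allocated pages, so larger events fit
 * and the guest can no longer reclaim the backing memory. Sizes and
 * field names are invented for this sketch.
 */
#include <stdint.h>

#define VM_EVENT_SLOT_SIZE  512   /* room for a bigger vm_event request */

struct vm_event_slot {
    uint32_t state;               /* free / request pending / response ready */
    uint32_t pad;
    uint8_t  data[VM_EVENT_SLOT_SIZE - 8];   /* the event payload itself */
};

struct vm_event_channels {
    uint32_t nr_slots;            /* e.g. one slot per vCPU for sync events */
    uint32_t pad;
    struct vm_event_slot slots[]; /* spans the hypervisor-allocated pages */
};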


Thanks,
Razvan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

