Re: [openxt-dev] VirtIO-Argo initial development proposal
On Wed, Dec 23, 2020 at 04:32:01PM -0500, Rich Persaud wrote:
> On Dec 17, 2020, at 07:13, Jean-Philippe Ouellet <jpo@xxxxxx> wrote:
> > On Wed, Dec 16, 2020 at 2:37 PM Christopher Clark
> > <christopher.w.clark@xxxxxxxxx> wrote:
> >> Hi all,
> >>
> >> I have written a page for the OpenXT wiki describing a proposal for
> >> initial development towards the VirtIO-Argo transport driver, and the
> >> related system components to support it, destined for OpenXT and
> >> upstream projects:
> >>
> >> https://openxt.atlassian.net/wiki/spaces/~cclark/pages/1696169985/VirtIO-Argo+Development+Phase+1

Thanks for the detailed document, I've taken a look and there's indeed a
lot of work to do listed there :). I have some suggestions and questions.

Overall I think it would be easier for VirtIO to take a new transport if
it's not tied to a specific hypervisor. The way Argo is implemented right
now is using hypercalls, which is a mechanism specific to Xen. IMO it
might be easier to start by having an Argo interface using MSRs, which
all hypervisors can implement, and then base the VirtIO implementation
on top of that interface. It could be presented as a hypervisor-agnostic
mediated interface for inter-domain communication or some such.

That links to a question: has any of this been discussed with the VirtIO
folks, either at OASIS or the Linux kernel? The document mentions
"Destination: mainline Linux kernel, via the Xen community" regarding
the upstreamability of the VirtIO-Argo transport driver, but I think
this would have to go through the VirtIO maintainers and not the Xen
ones, so you might want their feedback quite early to make sure they are
OK with the approach taken. In turn, this might also require OASIS to
agree to document a new transport.

> >> Please review ahead of tomorrow's OpenXT Community Call.
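To make the MSR suggestion above concrete: a minimal sketch of what such
an interface could look like from the guest side. Every MSR index,
opcode and name below is invented for discussion only; nothing here is
specified by Xen, Argo or VirtIO today. The MSR accessors are passed in
as function pointers so the call sequence can be exercised outside a
guest (in a real guest they would be wrmsr/rdmsr).

```c
#include <stdint.h>

/* Synthetic MSR block (placeholder indices, invented here). */
enum {
    MSR_ARGO_FEATURES = 0x40001000, /* RO: version/capability bits      */
    MSR_ARGO_ARG_GPA  = 0x40001001, /* WO: guest-phys addr of arg page  */
    MSR_ARGO_OP       = 0x40001002, /* WO: opcode; the write = doorbell */
    MSR_ARGO_RESULT   = 0x40001003, /* RO: result of last operation     */
};

/* Example opcodes mirroring existing Argo hypercall operations. */
enum {
    ARGO_OP_REGISTER_RING = 1,
    ARGO_OP_SENDV         = 2,
};

/* Pluggable MSR accessors: wrmsr/rdmsr in a real guest, mocks here. */
typedef void     (*wrmsr_fn)(uint32_t msr, uint64_t val);
typedef uint64_t (*rdmsr_fn)(uint32_t msr);

/* One Argo operation: publish the argument page, ring the doorbell by
 * writing the opcode, then read back a signed result. Any hypervisor
 * that intercepts MSR accesses could implement this without any
 * Xen-specific hypercall plumbing in the guest. */
static int64_t argo_msr_op(wrmsr_fn wr, rdmsr_fn rd,
                           uint32_t op, uint64_t arg_gpa)
{
    wr(MSR_ARGO_ARG_GPA, arg_gpa);
    wr(MSR_ARGO_OP, op);
    return (int64_t)rd(MSR_ARGO_RESULT);
}

/* Tiny stand-in back end so the sequence can be run anywhere. */
static uint64_t fake_regs[4];
static void fake_wr(uint32_t msr, uint64_t val)
{
    fake_regs[msr - MSR_ARGO_FEATURES] = val;
}
static uint64_t fake_rd(uint32_t msr)
{
    return fake_regs[msr - MSR_ARGO_FEATURES];
}
```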
> >>
> >> I would draw your attention to the Comparison of Argo interface
> >> options section:
> >>
> >> https://openxt.atlassian.net/wiki/spaces/~cclark/pages/1696169985/VirtIO-Argo+Development+Phase+1#Comparison-of-Argo-interface-options
> >>
> >> where further input to the table would be valuable;
> >> and would also appreciate input on the IOREQ project section:
> >>
> >> https://openxt.atlassian.net/wiki/spaces/~cclark/pages/1696169985/VirtIO-Argo+Development+Phase+1#Project:-IOREQ-for-VirtIO-Argo
> >>
> >> in particular, whether an IOREQ implementation to support the
> >> provision of devices to the frontends can replace the need for any
> >> userspace software to interact with an Argo kernel interface for the
> >> VirtIO-Argo implementation.
> >>
> >> thanks,
> >> Christopher
> >
> > Hi,
> >
> > Really excited to see this happening, and disappointed that I'm not
> > able to contribute at this time. I don't think I'll be able to join
> > the call, but wanted to share some initial thoughts from my
> > middle-of-the-night review anyway.
> >
> > Super rough notes in raw unedited notes-to-self form:
> >
> > main point of feedback is: I love the desire to get a non-shared-mem
> > transport backend for virtio standardized. It moves us closer to an
> > HMX-only world. BUT: virtio is relevant to many hypervisors beyond
> > Xen, not all of which have the same views on how policy enforcement
> > should be done, namely some have a preference for capability-oriented
> > models over type-enforcement / MAC models. It would be nice if any
> > labeling encoded into the actual specs / guest-boundary protocols
> > would be strictly a mechanism, and be policy-agnostic, in particular
> > not making implicit assumptions about XSM / SELinux / similar. I don't
> > have specific suggestions at this point, but would love to discuss.
> >
> > thoughts on how to handle device enumeration? hotplug notifications?
> > - can't rely on xenstore
> > - need some internal argo messaging for this?
> > - name service w/ well-known names? starts to look like xenstore
> >   pretty quickly...
> > - granular disaggregation of backend device-model providers desirable

I'm also curious about this part. I was assuming this would be done
using some kind of Argo messages, but there's no mention in the
document. It would be nice to elaborate a little more on this in the
document.

> > how does resource accounting work? each side pays for their own
> > delivery ring?
> > - init in already-guest-mapped mem & simply register?
> > - how does it compare to grant tables?
> > - do you need to go through linux driver to alloc (e.g. xengntalloc)
> >   or has way to share arbitrary otherwise not-special userspace pages
> >   (e.g. u2mfn, with all its issues (pinning, reloc, etc.))?
> >
> > ioreq is tangled with grant refs, evt chans, generic vmexit
> > dispatcher, instruction decoder, etc. none of which seems desirable if
> > trying to move towards world with strictly safer guest interfaces
> > exposed (e.g. HMX-only)

I think this needs Christopher's clarification, but it's my
understanding that the Argo transport wouldn't need IOREQs at all, since
all data exchange would be done using the Argo interfaces; there would
be no MMIO emulation or anything similar. The mention of IOREQs is
because the Arm folks are working on using IOREQs on Arm to enable
virtio-mmio on Xen.

From my reading of the document, it seems Argo VirtIO would still rely
on event channels. It would IMO be better if interrupts were instead
delivered using a native mechanism, something like MSI delivery using a
destination APIC ID, vector, delivery mode and trigger mode.

Roger.
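For concreteness, the MSI-style delivery suggested in the last paragraph
could be parameterised per ring along these lines. The struct and its
role in a hypothetical ring-registration call are invented here for
discussion; the address/data encoding itself is the standard x86 MSI
format (destination in bits 19:12 of the 0xFEExxxxx address, vector in
bits 7:0 of the data, delivery mode in bits 10:8, trigger mode in
bit 15).

```c
#include <stdint.h>

/* Hypothetical per-ring interrupt configuration a guest might hand to
 * the hypervisor when registering an Argo ring, instead of binding an
 * event channel. */
struct argo_ring_irq {
    uint8_t dest_apic_id;  /* destination local APIC ID        */
    uint8_t vector;        /* interrupt vector                 */
    uint8_t delivery_mode; /* 0 = fixed, 1 = lowest priority.. */
    uint8_t trigger_mode;  /* 0 = edge, 1 = level              */
};

/* Standard x86 MSI address encoding of the fields above. */
static uint32_t msi_address(const struct argo_ring_irq *c)
{
    return 0xFEE00000u | ((uint32_t)c->dest_apic_id << 12);
}

/* Standard x86 MSI data encoding of the fields above. */
static uint32_t msi_data(const struct argo_ring_irq *c)
{
    return (uint32_t)c->vector |
           ((uint32_t)c->delivery_mode << 8) |
           ((uint32_t)c->trigger_mode << 15);
}
```

The point of this shape is that the hypervisor can inject the interrupt
through the normal local-APIC path, with no Xen-specific event-channel
machinery visible to the guest.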