Re: [openxt-dev] VirtIO-Argo initial development proposal
On Tue, Dec 29, 2020 at 1:17 AM Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:
>
> On Wed, Dec 23, 2020 at 04:32:01PM -0500, Rich Persaud wrote:
> > On Dec 17, 2020, at 07:13, Jean-Philippe Ouellet <jpo@xxxxxx> wrote:
> > > On Wed, Dec 16, 2020 at 2:37 PM Christopher Clark
> > > <christopher.w.clark@xxxxxxxxx> wrote:
> > >> Hi all,
> > >>
> > >> I have written a page for the OpenXT wiki describing a proposal for
> > >> initial development towards the VirtIO-Argo transport driver, and the
> > >> related system components to support it, destined for OpenXT and
> > >> upstream projects:
> > >>
> > >> https://openxt.atlassian.net/wiki/spaces/~cclark/pages/1696169985/VirtIO-Argo+Development+Phase+1
>
> Thanks for the detailed document, I've taken a look and there's indeed
> a lot of work to do listed there :). I have some suggestions and
> questions.
>
> Overall I think it would be easier for VirtIO to take a new transport
> if it's not tied to a specific hypervisor. The way Argo is implemented
> right now is using hypercalls, which is a mechanism specific to Xen.
> IMO it might be easier to start by having an Argo interface using
> MSRs, that all hypervisors can implement, and then base the VirtIO
> implementation on top of that interface. It could be presented as a
> hypervisor-agnostic mediated interface for inter-domain communication
> or some such.

Thanks - that is an interesting option for a new interface, and it would
definitely be advantageous to be able to extend the benefits of this
approach beyond the Xen hypervisor. I have added it to our planning
document to investigate.
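To make that option a little more concrete, here is a rough, purely
illustrative sketch of one possible shape for such an interface. Every
MSR index, constant, structure and function name below is invented for
this example and is not an existing ABI; the operation names just mirror
Argo's current hypercall operations (register ring, unregister ring,
sendv, notify).

/*
 * Hypothetical sketch only: an MSR-based doorbell encoding of Argo's
 * existing operations, intended to be implementable by any hypervisor.
 * The MSR indices are placeholders chosen in the region that hardware
 * vendors leave available for hypervisor-defined MSRs; none of this
 * exists today.
 */
#include <stdint.h>

#define ARGO_MSR_OP       0x40000200u  /* hypothetical: operation doorbell */
#define ARGO_MSR_ARG_GPA  0x40000201u  /* hypothetical: guest-physical address
                                          of the operation's argument block */

enum argo_msr_opcode {
    ARGO_OP_REGISTER_RING   = 1,
    ARGO_OP_UNREGISTER_RING = 2,
    ARGO_OP_SENDV           = 3,
    ARGO_OP_NOTIFY          = 4,
};

/* Guest-kernel-only helpers: WRMSR/RDMSR are privileged instructions. */
static inline void argo_wrmsr(uint32_t msr, uint64_t val)
{
    uint32_t lo = (uint32_t)val, hi = (uint32_t)(val >> 32);
    __asm__ __volatile__("wrmsr" :: "c"(msr), "a"(lo), "d"(hi));
}

static inline uint64_t argo_rdmsr(uint32_t msr)
{
    uint32_t lo, hi;
    __asm__ __volatile__("rdmsr" : "=a"(lo), "=d"(hi) : "c"(msr));
    return ((uint64_t)hi << 32) | lo;
}

/*
 * Issue one operation: publish the guest-physical address of the
 * argument block, then write the opcode.  The MSR write traps to the
 * hypervisor, which validates the arguments, performs the data movement
 * for sendv, and leaves a status code to be read back.
 */
static inline int64_t argo_msr_op(enum argo_msr_opcode op, uint64_t arg_gpa)
{
    argo_wrmsr(ARGO_MSR_ARG_GPA, arg_gpa);
    argo_wrmsr(ARGO_MSR_OP, (uint64_t)op);
    return (int64_t)argo_rdmsr(ARGO_MSR_OP);  /* hypothetical status readback */
}

The point is only that the guest-visible surface could be reduced to a
couple of MSRs plus in-memory argument blocks, which any hypervisor with
MSR intercepts can implement; the actual register layout and discovery
mechanism would need to be agreed with the other hypervisor communities.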
> That kind of links to a question: has any of this been discussed with
> the VirtIO folks, either at OASIS or the Linux kernel?

We identified a need within the Automotive Grade Linux community for the
ability to enforce access control. They want to use VirtIO for the usual
reasons of standardization and access to the existing pool of available
drivers, but there is currently no good answer for having both, so we
put Argo forward in a presentation to the AGL Virtualization Experts
group in August, and they are discussing it. The slides are available
here:

https://lists.automotivelinux.org/g/agl-dev-community/attachment/8595/0/Argo%20and%20VirtIO.pdf

If you think there's anyone we should invite to the upcoming call on the
14th of January, please let me know off-list.

> The document mentions: "Destination: mainline Linux kernel, via the
> Xen community" regarding the upstreamability of the VirtIO-Argo
> transport driver, but I think this would have to go through the VirtIO
> maintainers and not the Xen ones, hence you might want their feedback
> quite early to make sure they are OK with the approach taken, and in
> turn this might also require OASIS to agree to have a new transport
> documented.

We're aiming to gather requirements within the Xen community first,
since there are multiple approaches to VirtIO with Xen ongoing at the
moment, but you are right that a design review by the VirtIO community
in the near term is important. I think it would be helpful to that
process if the Xen community has tried to reach a consensus on the
design beforehand.

> > > thoughts on how to handle device enumeration? hotplug notifications?
> > > - can't rely on xenstore
> > > - need some internal argo messaging for this?
> > > - name service w/ well-known names? starts to look like xenstore
> > >   pretty quickly...
> > > - granular disaggregation of backend device-model providers desirable
>
> I'm also curious about this part and I was assuming this would be done
> using some kind of Argo messages, but there's no mention in the
> document. It would be nice to elaborate a little more on this in the
> document.

Ack, noted: some further design work is needed on this.

> > > how does resource accounting work? each side pays for their own
> > > delivery ring?
> > > - init in already-guest-mapped mem & simply register?
> > > - how does it compare to grant tables?
> > > - do you need to go through a linux driver to alloc (e.g. xengntalloc)
> > >   or is there a way to share arbitrary, otherwise not-special
> > >   userspace pages (e.g. u2mfn, with all its issues (pinning, reloc,
> > >   etc.))?
> > >
> > > ioreq is tangled with grant refs, evt chans, generic vmexit
> > > dispatcher, instruction decoder, etc., none of which seems desirable
> > > if trying to move towards a world with strictly safer guest
> > > interfaces exposed (e.g. HMX-only)
>
> I think this needs Christopher's clarification, but it's my
> understanding that the Argo transport wouldn't need IOREQs at all,
> since all data exchange would be done using the Argo interfaces; there
> would be no MMIO emulation or anything similar. The mention of IOREQs
> is because the Arm folks are working on using IOREQs on Arm to enable
> virtio-mmio on Xen.

Yes, that is correct.

> From my reading of the document, it seems Argo VirtIO would still rely
> on event channels. It would IMO be better if instead interrupts were
> delivered using a native mechanism, something like MSI delivery by
> using a destination APIC ID, vector, delivery mode and trigger mode.

Yes, Argo could deliver interrupts via another mechanism rather than
event channels; I have added this to our planning doc for investigation
(a rough sketch of the kind of per-ring configuration involved is in the
P.S. below):

https://openxt.atlassian.net/wiki/spaces/DC/pages/1696169985/VirtIO-Argo+Development+Phase+1

thanks,

Christopher
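P.S. Purely as an illustration of the interrupt-delivery point above:
the per-ring routing information Roger describes (modelled on the x86
MSI address/data fields) might look something like the structure below.
This is a hypothetical sketch for discussion, not an existing or
proposed interface; the idea would be that a guest hands something like
it to the hypervisor when registering a ring, and the hypervisor then
injects that vector directly on message arrival instead of signalling
an event channel.

/* Hypothetical sketch only: MSI-style interrupt routing for one Argo
 * ring, as an alternative to binding an event channel. */
#include <stdint.h>

struct argo_ring_irq_cfg {
    uint32_t dest_apic_id;   /* destination local APIC ID in the guest */
    uint8_t  vector;         /* interrupt vector to inject */
    uint8_t  delivery_mode;  /* e.g. fixed or lowest-priority */
    uint8_t  trigger_mode;   /* edge- or level-triggered */
    uint8_t  reserved;       /* pad to an 8-byte size */
};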