
Re: Enabling hypervisor agnosticism for VirtIO backends



Hi Stefan,

On Mon, Aug 23, 2021 at 10:58:46AM +0100, Stefan Hajnoczi wrote:
> On Mon, Aug 23, 2021 at 03:25:00PM +0900, AKASHI Takahiro wrote:
> > Hi Stefan,
> > 
> > On Tue, Aug 17, 2021 at 11:41:01AM +0100, Stefan Hajnoczi wrote:
> > > On Wed, Aug 04, 2021 at 12:20:01PM -0700, Stefano Stabellini wrote:
> > > > > Could we consider the kernel internally converting IOREQ messages from
> > > > > the Xen hypervisor to eventfd events? Would this scale with other
> > > > > kernel hypercall interfaces?
> > > > > 
> > > > > So any thoughts on what directions are worth experimenting with?
> > > >  
> > > > One option we should consider is for each backend to connect to Xen via
> > > > the IOREQ interface. We could generalize the IOREQ interface and make it
> > > > hypervisor agnostic. The interface is really trivial and easy to add.
> > > > The only Xen-specific part is the notification mechanism, which is an
> > > > event channel. If we replaced the event channel with something else the
> > > > interface would be generic. See:
> > > > https://gitlab.com/xen-project/xen/-/blob/staging/xen/include/public/hvm/ioreq.h#L52
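
For reference, the structure behind that link looks roughly like the
following (paraphrased from xen/include/public/hvm/ioreq.h; the exact
layout may differ between Xen versions). vp_eport, the event channel
port, is indeed the only Xen-specific field:

  struct ioreq {
      uint64_t addr;            /* guest-physical address accessed    */
      uint64_t data;            /* value written, or buffer for read  */
      uint32_t count;           /* repeat count (x86 rep prefixes)    */
      uint32_t size;            /* access width in bytes              */
      uint32_t vp_eport;        /* notification event channel: the    */
                                /* only Xen-specific part             */
      uint16_t _pad0;
      uint8_t  state:4;         /* request/response handshake state   */
      uint8_t  data_is_ptr:1;   /* data is a guest paddr, not a value */
      uint8_t  dir:1;           /* 1 = read, 0 = write                */
      uint8_t  df:1;
      uint8_t  type;            /* PIO, MMIO copy, etc.               */
  };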
> > > 
> > > There have been experiments with something kind of similar in KVM
> > > recently (see struct ioregionfd_cmd):
> > > https://lore.kernel.org/kvm/dad3d025bcf15ece11d9df0ff685e8ab0a4f2edd.1613828727.git.eafanasova@xxxxxxxxx/
> > 
> > Do you know the current status of Elena's work?
> > It was last February that she posted her latest patch
> > and it has not been merged upstream yet.
> 
> Elena worked on this during her Outreachy internship. At the moment no
> one is actively working on the patches.

Does Red Hat plan to take over or follow up on her work?
# I'm simply asking out of curiosity.

> > > > There is also another problem. IOREQ is probably not the only
> > > > interface needed. Have a look at
> > > > https://marc.info/?l=xen-devel&m=162373754705233&w=2. Don't we also need
> > > > an interface for the backend to inject interrupts into the frontend? And
> > > > if the backend requires dynamic memory mappings of frontend pages, then
> > > > we would also need an interface to map/unmap domU pages.
> > > > 
> > > > These interfaces are a lot more problematic than IOREQ: IOREQ is tiny
> > > > and self-contained. It is easy to add anywhere. A new interface to
> > > > inject interrupts or map pages is more difficult to manage because it
> > > > would require changes scattered across the various emulators.
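
To make the distinction concrete, a purely hypothetical sketch of the
three interface groups described above might look like this in C (none
of these functions exist today; the names are made up for illustration):

  /* Hypothetical prototypes only, for illustration. */
  int   be_ioreq_attach(uint32_t domid, int *notify_fd);
                                    /* IOREQ-style trap-and-forward   */
  int   be_irq_inject(uint32_t domid, uint32_t irq);
                                    /* inject interrupt into the FE   */
  void *be_map_guest(uint32_t domid, uint64_t gfn, size_t nr_pages);
                                    /* dynamically map frontend pages */
  int   be_unmap_guest(void *va, size_t nr_pages);

The first is self-contained in that only the backend and the hypervisor
need to agree on it; the latter two would have to be honoured by every
emulator that touches guest state.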
> > > 
> > > Something like ioreq is indeed necessary to implement arbitrary devices,
> > > but if you are willing to restrict yourself to VIRTIO then other
> > > interfaces are possible too because the VIRTIO device model is different
> > > from the general purpose x86 PIO/MMIO that Xen's ioreq seems to support.
> > 
> > Can you please elaborate on your thoughts a bit more here?
> > 
> > It seems to me that trapping MMIO accesses to the configuration space
> > and forwarding those events to the BE (or device emulation) is a quite
> > straightforward way to emulate device MMIOs.
> > Or are you thinking of something like the protocols used in vhost-user?
> > 
> > # By contrast, virtio-ivshmem only requires the driver to explicitly
> > # forward "write" requests for MMIO accesses to the BE. But I don't
> > # think that's your point.
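
To illustrate what I mean by trapping and forwarding, a minimal sketch
of such a backend loop might look like the following C. All the helper
functions and the request layout are hypothetical placeholders, not
actual Stratos or Xen code; how they are implemented is exactly the
hypervisor-specific part under discussion:

  #include <stdint.h>

  #define STATE_IOREQ_READY  1    /* request posted for the backend    */
  #define STATE_IORESP_READY 3    /* response filled in by the backend */

  /* Hypothetical helpers (event channel, eventfd, ...). */
  extern void wait_for_notification(void);
  extern void notify_completion(void);
  extern uint64_t virtio_mmio_read(uint64_t off, uint32_t size);
  extern void virtio_mmio_write(uint64_t off, uint64_t val,
                                uint32_t size);

  struct mmio_req {               /* simplified ioreq-like record */
      uint64_t addr;
      uint64_t data;
      uint32_t size;
      uint8_t  dir;               /* 1 = guest read, 0 = guest write */
      uint8_t  state;
  };

  static void backend_loop(volatile struct mmio_req *req, uint64_t base)
  {
      for (;;) {
          wait_for_notification();
          if (req->state != STATE_IOREQ_READY)
              continue;
          uint64_t off = req->addr - base;  /* virtio-mmio register */
          if (req->dir)
              req->data = virtio_mmio_read(off, req->size);
          else
              virtio_mmio_write(off, req->data, req->size);
          req->state = STATE_IORESP_READY;
          notify_completion();
      }
  }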
> 
> See my first reply to this email thread about alternative interfaces for
> VIRTIO device emulation. The main thing to note was that although the
> shared memory vring is used by VIRTIO transports today, the device model
> actually allows transports to implement virtqueues differently (e.g.
> making it possible to create a VIRTIO over TCP transport without shared
> memory in the future).

Do you have any examples of such use cases or systems?

> It's possible to define a hypercall interface as a new VIRTIO transport
> that provides higher-level virtqueue operations. Doing this is more work
> than using vrings though since existing guest driver and device
> emulation code already supports vrings.
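
Just to check my understanding, such a transport would replace the
shared ring with operations roughly like these? The prototypes below
are purely hypothetical; no such hypercalls exist today:

  /* Hypothetical "higher-level virtqueue" hypercall interface. */
  int vq_hc_add_buffer(uint32_t dev, uint32_t vq,
                       const void *out, size_t out_len, /* drv -> dev */
                       void *in, size_t in_len,         /* dev -> drv */
                       uint64_t *token);   /* completion handle       */
  int vq_hc_wait_used(uint32_t dev, uint32_t vq,
                      uint64_t *token, size_t *bytes_written);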

Personally, I'm open to discussing your point, but

> I don't know the requirements of Stratos so I can't say if creating a
> new hypervisor-independent interface (VIRTIO transport) that doesn't
> rely on shared memory vrings makes sense. I just wanted to raise the
> idea in case you find that VIRTIO's vrings don't meet your requirements.

While I cannot speak for the project as a whole, the JIRA task assigned
to me describes:
  Deliverables
    * Low level library allowing:
      * management of virtio rings and buffers
  [and so on]
So supporting the shared-memory-based vring (sketched below) is one of
our assumptions.
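
For reference, the shared-memory vring we are expected to manage is the
standard split-virtqueue layout from the VIRTIO spec (cf. Linux's
include/uapi/linux/virtio_ring.h), roughly:

  #include <stdint.h>

  struct vring_desc {         /* descriptor table entry: one buffer */
      uint64_t addr;          /* guest-physical address             */
      uint32_t len;           /* buffer length in bytes             */
      uint16_t flags;         /* NEXT / WRITE / INDIRECT            */
      uint16_t next;          /* index of chained descriptor        */
  };

  struct vring_avail {        /* driver -> device */
      uint16_t flags;
      uint16_t idx;           /* next free slot in ring[]           */
      uint16_t ring[];        /* heads of available chains          */
  };

  struct vring_used_elem {
      uint32_t id;            /* head of a completed chain          */
      uint32_t len;           /* bytes written by the device        */
  };

  struct vring_used {         /* device -> driver */
      uint16_t flags;
      uint16_t idx;
      struct vring_used_elem ring[];
  };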

In my understanding, the goal of the Stratos project is to consolidate
several VMs onto a single SoC while sharing most of the physical IP
blocks; in that setting, shared memory should, I assume, be the most
efficient transport for virtio.
One of the target applications would be automotive, I guess.

Alex and Mike should have more to say here.

-Takahiro Akashi

> Stefan
