Re: [PATCH V3 00/23] IOREQ feature (+ virtio-mmio) on Arm
On Mon, Nov 30, 2020 at 04:21:59PM +0000, Alex Bennée wrote:
>
> Oleksandr Tyshchenko <olekstysh@xxxxxxxxx> writes:
>
> > From: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
> >
> > Date: Sat, 28 Nov 2020 22:33:51 +0200
> > Subject: [PATCH V3 00/23] IOREQ feature (+ virtio-mmio) on Arm
> > MIME-Version: 1.0
> > Content-Type: text/plain; charset=UTF-8
> > Content-Transfer-Encoding: 8bit
> >
> > Hello all.
> >
> > The purpose of this patch series is to add IOREQ/DM support to Xen on
> > Arm. You can find an initial discussion at [1] and the RFC/V1/V2 series
> > at [2]/[3]/[4].
> > Xen on Arm requires some implementation to forward guest MMIO accesses
> > to a device model in order to implement a virtio-mmio backend or even a
> > mediator outside of the hypervisor.
> > As Xen on x86 already contains the required support, this series tries
> > to make it common and introduces the Arm-specific bits plus some new
> > functionality. The patch series is based on Julien's PoC "xen/arm: Add
> > support for Guest IO forwarding to a device emulator".
> > Besides splitting the existing IOREQ/DM support and introducing the Arm
> > side, the series also includes virtio-mmio related changes (the last 2
> > patches, for the toolstack) so that reviewers can see how the whole
> > picture could look.
>
> Thanks for posting the latest version.
>
> >
> > According to the initial discussion there are a few open
> > questions/concerns regarding security and performance in the VirtIO
> > solution:
> > 1. virtio-mmio vs virtio-pci, SPI vs MSI, different use-cases require
> >    different transports...
>
> I think I'm repeating things here I've said in various ephemeral video
> chats over the last few weeks, but I should probably put things down on
> the record.
>
> I think the original intention of the virtio framers was that advanced
> features would build on virtio-pci, because you get a bunch of things
> "for free" - notably enumeration and MSI support. There is an assumption
> that by the time you add these features to virtio-mmio you end up
> re-creating your own less well tested version of virtio-pci. I've not
> been terribly convinced by the argument that the guest implementation of
> PCI presents a sufficiently large blob of code to make the simpler MMIO
> desirable. My attempts to build two virtio kernels (PCI/MMIO) with
> otherwise the same devices weren't terribly conclusive either way.
>
> That said, virtio-mmio still has life in it, because the cloudy,
> slimmed-down guests have moved to using it: PCI enumeration is a road
> block to their fast boot-up requirements. I'm sure they would also
> appreciate an MSI implementation to reduce the overhead that handling
> notifications currently has under trap-and-emulate.
>
> AIUI, for Xen the other downside to PCI is that you would have to
> emulate it in the hypervisor, which would be additional code at the most
> privileged level.

Xen already emulates (or maybe it would be better to say decodes) PCI
accesses in the hypervisor and forwards them to the appropriate device
model using the IOREQ interface, so that's not something new. It's not
really emulating the PCI config space, but just detecting accesses and
forwarding them to the device model that should handle them. You can
register different emulators in user space that handle accesses to
different PCI devices from a guest.

Thanks, Roger.
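
For readers unfamiliar with the IOREQ interface being discussed, here is a
minimal sketch of how a user-space device model can register with Xen so
that guest accesses to an MMIO window (e.g. a virtio-mmio transport region)
are forwarded to it. This is not from the thread: the guest domid and the
MMIO range are placeholders, the calls are the existing libxendevicemodel
interface that the series aims to make common, and exact headers/constants
can differ between Xen versions.

    /*
     * Hedged sketch: register an IOREQ server and claim an MMIO range so
     * that guest accesses to it are forwarded to this device model.
     * domid/base/size are illustrative placeholders.
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <xendevicemodel.h>

    int main(void)
    {
        domid_t domid = 1;                         /* placeholder guest domain */
        uint64_t base = 0x02000000, size = 0x200;  /* placeholder MMIO window */
        ioservid_t id;

        xendevicemodel_handle *dmod = xendevicemodel_open(NULL, 0);
        if (!dmod) {
            perror("xendevicemodel_open");
            return EXIT_FAILURE;
        }

        /* Create an IOREQ server (0 = no buffered ioreq ring). */
        if (xendevicemodel_create_ioreq_server(dmod, domid, 0, &id))
            goto fail;

        /* Ask Xen to forward guest accesses in [base, base + size - 1] here. */
        if (xendevicemodel_map_io_range_to_ioreq_server(dmod, domid, id,
                                                        1 /* MMIO */,
                                                        base, base + size - 1))
            goto fail;

        /* Enable the server; a real backend would now map the shared ioreq
         * pages and service requests in its event loop (not shown). */
        if (xendevicemodel_set_ioreq_server_state(dmod, domid, id, 1))
            goto fail;

        printf("ioreq server %u handling the MMIO range\n", (unsigned)id);
        xendevicemodel_close(dmod);
        return EXIT_SUCCESS;

    fail:
        perror("ioreq server setup");
        xendevicemodel_close(dmod);
        return EXIT_FAILURE;
    }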
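Roger's point about registering different user-space emulators for different
PCI devices maps onto the same interface. A hedged fragment, reusing the
dmod/domid/id handles from the sketch above; 0000:00:03.0 is an arbitrary
example device, not one from the thread:

    /*
     * Claim config-space accesses for a single PCI device (SBDF) so Xen
     * forwards only that device's accesses to this emulator.
     */
    #include <xendevicemodel.h>

    int claim_example_pci_device(xendevicemodel_handle *dmod, domid_t domid,
                                 ioservid_t id)
    {
        return xendevicemodel_map_pcidev_to_ioreq_server(dmod, domid, id,
                                                         0 /* segment */,
                                                         0 /* bus */,
                                                         3 /* device */,
                                                         0 /* function */);
    }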