
Re: [PATCH V3 00/23] IOREQ feature (+ virtio-mmio) on Arm




On 30.11.20 18:21, Alex Bennée wrote:

Hi Alex

[added missed subject title]

Oleksandr Tyshchenko <olekstysh@xxxxxxxxx> writes:

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>


Date: Sat, 28 Nov 2020 22:33:51 +0200
Subject: [PATCH V3 00/23] IOREQ feature (+ virtio-mmio) on Arm

Hello all.

The purpose of this patch series is to add IOREQ/DM support to Xen on Arm.
You can find the initial discussion at [1] and the RFC/V1/V2 series at [2]/[3]/[4].
Xen on Arm requires an implementation to forward guest MMIO accesses to a device
model in order to implement a virtio-mmio backend, or even a mediator, outside of
the hypervisor. As Xen on x86 already contains the required support, this series
tries to make it common and introduces the Arm-specific bits plus some new
functionality. The patch series is based on Julien's PoC "xen/arm: Add support
for Guest IO forwarding to a device emulator".
Besides splitting the existing IOREQ/DM support and introducing the Arm side,
the series also includes virtio-mmio related changes (the last 2 patches, for
the toolstack) so that reviewers can see how the whole picture could look.

Thanks for posting the latest version.

According to the initial discussion there are a few open questions/concerns
regarding security and performance in the VirtIO solution:
1. virtio-mmio vs virtio-pci, SPI vs MSI, different use-cases require different
    transports...
I think I'm repeating things here I've said in various ephemeral video
chats over the last few weeks but I should probably put things down on
the record.

I think the original intention of the virtio framers was that advanced
features would build on virtio-pci, because you get a bunch of things
"for free" - notably enumeration and MSI support. There is an assumption
that by the time you add these features to virtio-mmio you end up
re-creating your own less well tested version of virtio-pci. I've not
been terribly convinced by the argument that the guest implementation of
PCI presents a sufficiently large blob of code to make the simpler MMIO
desirable. My attempts to build two virtio kernels (PCI/MMIO) with
otherwise the same devices weren't terribly conclusive either way.

That said, virtio-mmio still has life in it: the slimmed-down cloud
guests have moved to using it because PCI enumeration is a road block to
their fast boot-up requirements. I'm sure they would also appreciate an
MSI implementation to reduce the overhead that handling notifications
currently has under trap-and-emulate.

AIUI, for Xen the other downside to PCI is that you would have to emulate
it in the hypervisor, which would mean additional code at the most
privileged level.
Thank you for putting things together here and for the valuable input. As for me, the "virtio-mmio & MSI solution" as a performance improvement indeed sounds interesting. Flipping through the virtio-mmio links I found a discussion regarding that [1]. I think this needs additional investigation and experiments; however, I am not sure the required infrastructure already exists in Xen on Arm. Once we make some progress with the IOREQ series, I will be able to focus on whichever enhancements we consider worthwhile.
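
For context on where the notification overhead comes from: with virtio-mmio, every hot-path register access traps and (with this series) is forwarded to the backend as an IOREQ. Below is a small, purely illustrative C sketch of the guest side; the register offsets follow the published virtio-mmio layout, but the helpers and base pointer are placeholders, not code from this series.

/*
 * Illustrative only (not code from this series): the guest-visible
 * registers a virtio-mmio driver touches on the hot path.  Every one
 * of these accesses is a trapped MMIO access that must be forwarded
 * to the backend, which is the overhead being discussed.
 */
#include <stdint.h>

/* Offsets per the virtio-mmio register layout. */
#define VIRTIO_MMIO_QUEUE_NOTIFY      0x050
#define VIRTIO_MMIO_INTERRUPT_STATUS  0x060
#define VIRTIO_MMIO_INTERRUPT_ACK     0x064

static inline void mmio_write32(volatile uint8_t *base, uint32_t off, uint32_t val)
{
    *(volatile uint32_t *)(base + off) = val;   /* traps on Xen/Arm */
}

static inline uint32_t mmio_read32(volatile uint8_t *base, uint32_t off)
{
    return *(volatile uint32_t *)(base + off);  /* traps on Xen/Arm */
}

/* Guest -> backend: kick a queue (one trap per notification). */
static void virtio_mmio_kick(volatile uint8_t *base, uint32_t queue_index)
{
    mmio_write32(base, VIRTIO_MMIO_QUEUE_NOTIFY, queue_index);
}

/* Backend -> guest: on every interrupt the driver reads the status and
 * acks it, i.e. two more traps per notification that an MSI-capable
 * transport could avoid. */
static uint32_t virtio_mmio_irq_ack(volatile uint8_t *base)
{
    uint32_t status = mmio_read32(base, VIRTIO_MMIO_INTERRUPT_STATUS);
    mmio_write32(base, VIRTIO_MMIO_INTERRUPT_ACK, status);
    return status;
}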



2. virtio backend is able to access all guest memory, some kind of protection
    is needed: 'virtio-iommu in Xen' vs 'pre-shared-memory & memcpys in
    guest'
This is also an area of interest for Project Stratos and something we
would like to see solved generally for all hypervisors. There is a good
write-up of some approaches that Jean-Philippe did on the Stratos
mailing list:

   From: Jean-Philippe Brucker <jean-philippe@xxxxxxxxxx>
   Subject: Limited memory sharing investigation
   Message-ID: <20201002134336.GA2196245@myrica>

I suspect there is a good argument for the simplicity of a combined
virtqueue, but it is unlikely to be very performance orientated.

I will look at it.
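
For reference, the "pre-shared-memory & memcpys in guest" option mentioned above boils down to bouncing payloads through a region shared with the backend once at setup time, so the backend never needs access to arbitrary guest pages. A minimal, purely illustrative C sketch follows; all names here are hypothetical and this is not taken from any existing implementation.

/*
 * Hypothetical sketch of a "bounce through pre-shared memory" frontend.
 * The pool stands for a region shared with the backend at setup time;
 * descriptors then carry offsets into that region instead of guest
 * physical addresses, so the backend's view of guest memory is limited
 * to the pool.
 */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct shared_pool {
    uint8_t *base;   /* start of the pre-shared region */
    size_t   size;   /* total size of the region */
    size_t   next;   /* trivial bump allocator; a real ring would recycle */
};

struct xfer_desc {
    uint64_t offset; /* offset into the pre-shared region, not a GPA */
    uint32_t len;
};

/* Copy a request payload into the pre-shared region (the memcpy cost
 * being traded against the finer-grained protection of an IOMMU). */
static int pool_push(struct shared_pool *pool, const void *buf, size_t len,
                     struct xfer_desc *out)
{
    if (len > pool->size - pool->next)
        return -1;                      /* pool exhausted */

    memcpy(pool->base + pool->next, buf, len);
    out->offset = pool->next;
    out->len = (uint32_t)len;
    pool->next += len;
    return 0;
}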


3. interface between toolstack and 'out-of-qemu' virtio backend, avoid using
    Xenstore in virtio backend if possible.
I wonder how much work it would be for a Rust expert to make:

   https://github.com/slp/vhost-user-blk

handle an IOREQ signalling pathway instead of the vhost-user/eventfd
pathway? That would give a good indication of how "hypervisor blind"
these daemons could be made.
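
To make the question a little more concrete, here is a rough, hedged sketch of the dispatch loop such a daemon would run on the IOREQ side. The real structures and constants live in xen/include/public/hvm/ioreq.h; the trimmed-down struct and the wait/notify helpers below are stand-ins, not the actual library API, and a real implementation would also need memory barriers around the state transitions.

#include <stdint.h>

/* Values as in the public IOREQ ABI; the struct below is simplified,
 * see xen/include/public/hvm/ioreq.h for the real layout. */
#define STATE_IOREQ_READY   1
#define STATE_IORESP_READY  3
#define IOREQ_WRITE         0
#define IOREQ_READ          1

struct ioreq_slot {
    uint64_t addr;   /* guest physical address that trapped */
    uint64_t data;   /* value written, or value to return on a read */
    uint32_t size;   /* access width in bytes */
    uint8_t  state;  /* READY -> (backend handles it) -> RESP_READY */
    uint8_t  dir;    /* IOREQ_READ or IOREQ_WRITE */
};

/* Hypothetical glue: block until some vCPU kicks its event channel,
 * and signal completion back to that vCPU, respectively. */
extern int  wait_for_vcpu_event(void);
extern void notify_vcpu_done(int vcpu);

/* Device-model callbacks for the emulated MMIO region. */
extern uint64_t device_read(uint64_t addr, uint32_t size);
extern void     device_write(uint64_t addr, uint64_t data, uint32_t size);

/* 'ring' is the shared IOREQ page mapped from the hypervisor,
 * one slot per vCPU. */
static void ioreq_loop(volatile struct ioreq_slot *ring)
{
    for (;;) {
        int vcpu = wait_for_vcpu_event();
        volatile struct ioreq_slot *req = &ring[vcpu];

        if (req->state != STATE_IOREQ_READY)
            continue;                        /* spurious wakeup */

        if (req->dir == IOREQ_READ)
            req->data = device_read(req->addr, req->size);
        else
            device_write(req->addr, req->data, req->size);

        req->state = STATE_IORESP_READY;     /* hand the result back */
        notify_vcpu_done(vcpu);              /* kick the vCPU's port */
    }
}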

<snip>
Please note that the build test passed for the following modes:
1. x86: CONFIG_HVM=y / CONFIG_IOREQ_SERVER=y (default)
2. x86: #CONFIG_HVM is not set / #CONFIG_IOREQ_SERVER is not set
3. Arm64: CONFIG_HVM=y / CONFIG_IOREQ_SERVER=y
Forgive my relative newness to Xen, but how do I convince the hypervisor to
build with this enabled? I've tried variants of:

   make -j9 CROSS_COMPILE=aarch64-linux-gnu- XEN_TARGET_ARCH=arm64 menuconfig XEN_EXPERT=y [CONFIG_|XEN_|_]IOREQ_SERVER=y
CONFIG_IOREQ_SERVER is not protected by CONFIG_XEN_EXPERT. I described how to enable CONFIG_IOREQ_SERVER on Arm (it is disabled by default within this series) when explaining to Masami how to test this series, but forgot to add it here. Could you apply the one-line patch [2] and rebuild? Sorry for the inconvenience.


[1] https://lwn.net/Articles/812055/
[2] https://github.com/otyshchenko1/xen/commit/b371bc9a3c954595bfce01bad244260364bbcd48

--
Regards,

Oleksandr Tyshchenko



