
Re: Virtio in Xen on Arm (based on IOREQ concept)



Hi,

On 17/07/2020 19:34, Oleksandr wrote:

On 17.07.20 18:00, Roger Pau Monné wrote:
requires some implementation to forward guest MMIO accesses to a device model. As it turned out, Xen on x86 already contains most of the pieces needed to use that transport (via the existing IOREQ concept). Julien has already done a big amount of work in his PoC (xen/arm: Add support for Guest IO forwarding to a device emulator).

Using that code as a base we managed to create a completely functional PoC with a DomU running on a virtio block device instead of a traditional Xen PV driver, without modifications to DomU Linux. Our work is mostly about rebasing Julien's code on the current codebase (Xen 4.14-rc4), various tweaks to be able to run the emulator (virtio-disk backend) in a domain other than Dom0 (in our system we have a thin Dom0 and keep all backends in a driver domain),
How do you handle this use-case? Are you using grants in the VirtIO ring, or rather allowing the driver domain to map all the guest memory and then placing gfns on the ring, as is commonly done with VirtIO?

The second option. Xen grants are not used at all, and neither are event channels or Xenbus. That allows us to keep the guest

*unmodified*, which is one of the main goals. Yes, this may sound (or even is) non-secure, but the backend, which runs in the driver domain, is allowed to map all guest memory.
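
For reference, a minimal sketch of the split-virtqueue descriptor as laid out in the VirtIO spec: the addr field carries a plain guest physical address rather than a grant reference, which is why the backend has to be able to map arbitrary guest memory.

    #include <stdint.h>

    /* Split-virtqueue descriptor per the VirtIO spec (cf. Linux's
     * include/uapi/linux/virtio_ring.h, struct vring_desc). */
    struct virtq_desc {
        uint64_t addr;   /* guest physical address of the buffer */
        uint32_t len;    /* buffer length in bytes */
        uint16_t flags;  /* VIRTQ_DESC_F_NEXT / _WRITE / _INDIRECT */
        uint16_t next;   /* chained descriptor index if F_NEXT is set */
    };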

In the current backend implementation, a part of guest memory is mapped just to process a guest request and then unmapped again; there are no mappings in advance. The xenforeignmemory_map

call is used for that purpose. As an experiment I tried to map all guest memory in advance and just calculate the pointer at runtime; of course that logic performed better.
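
A minimal sketch of that per-request scheme against the libxenforeignmemory API (error handling trimmed; the request layout here is made up for illustration):

    #include <sys/mman.h>        /* PROT_READ */
    #include <string.h>
    #include <xenforeignmemory.h>

    /* Map one guest page only for the duration of a single request,
     * copy the data out, then unmap it -- no mappings in advance. */
    static int read_guest_page(xenforeignmemory_handle *fmem, uint32_t domid,
                               xen_pfn_t gfn, size_t off, void *dst, size_t len)
    {
        void *page = xenforeignmemory_map(fmem, domid, PROT_READ,
                                          1 /* nr pages */, &gfn, NULL);
        if (!page)
            return -1;

        memcpy(dst, (char *)page + off, len);
        xenforeignmemory_unmap(fmem, page, 1);
        return 0;
    }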

That works well for a PoC; however, I am not sure you can rely on it long term, as a guest is free to modify its memory layout. For instance, Linux may balloon memory in/out. You probably want to consider something similar to the mapcache in QEMU.
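
Roughly what such a mapcache amounts to, in the spirit of QEMU's xen-mapcache: keep recently used foreign mappings around instead of mapping per request, and drop them whenever the guest layout may have changed. A toy direct-mapped sketch; none of these names are QEMU's actual API:

    #include <sys/mman.h>
    #include <xenforeignmemory.h>

    #define MAPCACHE_SIZE 64

    struct mapcache_entry {
        xen_pfn_t gfn;
        void *vaddr;
        int valid;
    };

    static struct mapcache_entry cache[MAPCACHE_SIZE];

    static void *mapcache_lookup(xenforeignmemory_handle *fmem,
                                 uint32_t domid, xen_pfn_t gfn)
    {
        struct mapcache_entry *e = &cache[gfn % MAPCACHE_SIZE];

        if (e->valid && e->gfn == gfn)
            return e->vaddr;                 /* hit: reuse the mapping */

        if (e->valid)                        /* evict the old occupant */
            xenforeignmemory_unmap(fmem, e->vaddr, 1);

        e->vaddr = xenforeignmemory_map(fmem, domid,
                                        PROT_READ | PROT_WRITE,
                                        1, &gfn, NULL);
        e->gfn = gfn;
        e->valid = (e->vaddr != NULL);
        return e->vaddr;
    }

    /* Called when the guest may have ballooned pages in/out: stale
     * mappings must not be reused. */
    static void mapcache_invalidate(xenforeignmemory_handle *fmem)
    {
        for (int i = 0; i < MAPCACHE_SIZE; i++) {
            if (cache[i].valid) {
                xenforeignmemory_unmap(fmem, cache[i].vaddr, 1);
                cache[i].valid = 0;
            }
        }
    }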

On a similar topic, I am a bit surprised you didn't encounter memory exhaustion when trying to use virtio. Because of how Linux currently works (see XSA-300), the backend domain has to have at least as much RAM as the domains it serves. For instance, if you serve two domains with 1GB of RAM each, then your backend would need at least 2GB, plus some for its own purposes.

This probably wants to be resolved by allowing foreign mappings to be "paged" out, as you would for memory assigned to userspace.

I was thinking about static guest memory regions and forcing the guest to allocate descriptors from them (in order to map only a predefined region rather than all guest memory). But that implies modifying the guest...

[...]

misc fixes for our use-cases, and tool support for the configuration. Unfortunately, Julien doesn't have much time to allocate to this work anymore, so we would like to step in and continue.

*A few words about the Xen code:*
You can find the whole Xen series at [5]. The patches are in RFC state because some actions in the series should be reconsidered and implemented properly. Before submitting the final code for review, the first IOREQ patch (which is quite big) will be split into x86, Arm and common parts. Please note that the x86 part hasn't even been build-tested so far and could be broken by this series. Also, before going to the mailing list, the series probably wants splitting into adding IOREQ on Arm (which should be the first focus) and tools support for configuring the virtio-disk (which is going to be the first Virtio driver).
Sending a patch series to enable IOREQs on Arm first seems perfectly fine, and it doesn't have to come with the VirtIO backend. In fact I would recommend that you send it ASAP, so that you don't spend time working on a backend that would likely need to be modified according to the review comments received on the IOREQ series.

Completely agree with you. I will send it after splitting the IOREQ patch and performing some cleanup.

However, it is going to take some time to do it properly, taking into account that I personally won't be able to test on x86.
I think other members of the community should be able to help here. Besides, nowadays testing Xen on x86 is pretty easy with QEMU :).

Cheers,

--
Julien Grall



 

