
Re: Virtio in Xen on Arm (based on IOREQ concept)




On 21.07.20 17:27, Julien Grall wrote:
Hi,

Hello Julien



On 17/07/2020 19:34, Oleksandr wrote:

On 17.07.20 18:00, Roger Pau Monné wrote:

requires some implementation to forward guest MMIO access to a device model. And as it turned out the Xen on x86 contains most of the pieces to be able to use that transport (via the existing IOREQ concept). Julien has already done a big amount of work in his PoC (xen/arm: Add support for Guest IO forwarding to a device emulator). Using that code as a base we managed to create a completely functional PoC with DomU running on a virtio block device instead of a traditional Xen PV driver, without modifications to DomU Linux. Our work is mostly about rebasing Julien's code on the actual codebase (Xen 4.14-rc4) and various tweaks to be able to run the emulator (virtio-disk backend) in a domain other than Dom0 (in our system we have a thin Dom0 and keep all backends in a driver domain).

How do you handle this use-case? Are you using grants in the VirtIO ring, or rather allowing the driver domain to map all the guest memory and then placing gfns on the ring like it's commonly done with VirtIO?

Second option. Xen grants are not used at all, and neither are event channels or Xenbus. That allows us to have the guest *unmodified*, which is one of the main goals. Yes, this may sound (or even be) non-secure, but the backend which runs in the driver domain is allowed to map all guest memory.

In the current backend implementation a part of guest memory is mapped just to process a guest request and then unmapped; there are no mappings in advance. The xenforeignmemory_map call is used for that purpose. As an experiment I tried to map all guest memory in advance and just calculated the pointer at runtime. Of course that logic performed better.
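
For illustration, here is a minimal sketch of that per-request map/unmap scheme using the libxenforeignmemory API; the function name, the single-page request layout and the error handling are my assumptions, not the actual virtio-disk backend code:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/mman.h>           /* PROT_READ */
    #include <xenforeignmemory.h>

    /*
     * Map the single guest page referenced by a gfn taken from a virtio
     * descriptor, copy the payload out, then unmap it again. Nothing is
     * kept mapped across requests.
     */
    static int process_request(xenforeignmemory_handle *fmem, uint32_t domid,
                               xen_pfn_t gfn, void *buf, size_t len)
    {
        int err = 0;
        void *page = xenforeignmemory_map(fmem, domid, PROT_READ,
                                          1 /* nr pages */, &gfn, &err);

        if ( !page || err )
            return -1;

        memcpy(buf, page, len);     /* len <= page size assumed */

        xenforeignmemory_unmap(fmem, page, 1);
        return 0;
    }

The xenforeignmemory_handle would be obtained once with xenforeignmemory_open(NULL, 0) and reused for the backend's lifetime; mapping all guest memory in advance instead is the faster variant mentioned above.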

That works well for a PoC, however I am not sure you can rely on it long term as a guest is free to modify its memory layout. For instance, Linux may balloon in/out memory. You probably want to consider something similar to mapcache in QEMU.
Yes, that was considered and even tried.
The current backend implementation maps/unmaps only the needed part of guest memory for each request, with some kind of mapcache. I borrowed the x86 logic on Arm to invalidate the mapcache on the XENMEM_decrease_reservation call, so if the mapcache is in use it will be cleared. Hopefully a DomU without backends running is not going to balloon memory in/out often.
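
As a rough sketch of that scheme (structure and names are hypothetical, not the actual backend code), the mapcache is just a small table of foreign mappings that is dropped wholesale when a XENMEM_decrease_reservation from the guest invalidates it:

    #include <stdint.h>
    #include <sys/mman.h>
    #include <xenforeignmemory.h>

    #define MAPCACHE_ENTRIES 64     /* illustrative size */

    struct mapcache_entry {
        xen_pfn_t gfn;              /* guest frame backing this slot */
        void *va;                   /* NULL if the slot is free */
    };

    static struct mapcache_entry mapcache[MAPCACHE_ENTRIES];

    /* Return a mapping of the page at gfn, reusing a cached one if present. */
    static void *mapcache_lookup(xenforeignmemory_handle *fmem, uint32_t domid,
                                 xen_pfn_t gfn)
    {
        unsigned int i, free_slot = MAPCACHE_ENTRIES;
        int err = 0;

        for ( i = 0; i < MAPCACHE_ENTRIES; i++ )
        {
            if ( mapcache[i].va && mapcache[i].gfn == gfn )
                return mapcache[i].va;          /* hit */
            if ( !mapcache[i].va && free_slot == MAPCACHE_ENTRIES )
                free_slot = i;
        }

        if ( free_slot == MAPCACHE_ENTRIES )
            return NULL;                        /* full: caller maps directly */

        mapcache[free_slot].va = xenforeignmemory_map(fmem, domid,
                                                      PROT_READ | PROT_WRITE,
                                                      1, &gfn, &err);
        if ( err && mapcache[free_slot].va )
        {
            xenforeignmemory_unmap(fmem, mapcache[free_slot].va, 1);
            mapcache[free_slot].va = NULL;
        }
        mapcache[free_slot].gfn = gfn;
        return mapcache[free_slot].va;
    }

    /*
     * The guest ballooned pages out (XENMEM_decrease_reservation): any cached
     * mapping may now point at a stale page, so drop them all.
     */
    static void mapcache_invalidate(xenforeignmemory_handle *fmem)
    {
        for ( unsigned int i = 0; i < MAPCACHE_ENTRIES; i++ )
        {
            if ( mapcache[i].va )
            {
                xenforeignmemory_unmap(fmem, mapcache[i].va, 1);
                mapcache[i].va = NULL;
            }
        }
    }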



On a similar topic, I am a bit surprised you didn't encounter memory exhaustion when trying to use virtio. Because of how Linux currently works (see XSA-300), the backend domain has to have at least as much RAM as the domains it serves. For instance, if you serve two domains with 1GB of RAM each, then your backend would need at least 2GB + some for its own purposes.
I understand these bits. You have already warned me about that. When playing with mapping the whole guest memory in advance, I gave the DomU only 512MB, which was enough to not encounter memory exhaustion in my environment. Then I switched to the "map/unmap at runtime" model.



*A few words about the Xen code:*
You can find the whole Xen series at [5]. The patches are in RFC state because some actions in the series should be reconsidered and implemented properly. Before submitting the final code for review, the first IOREQ patch (which is quite big) will be split into x86, Arm and common parts. Please note, the x86 part wasn't even build-tested so far and could be broken by that series. Also the series probably wants splitting into adding IOREQ on Arm (which should be focused on first) and tools support for the virtio-disk configuration (virtio-disk being the first Virtio driver) before going to the mailing list.
Sending first a patch series to enable IOREQs on Arm seems perfectly
fine, and it doesn't have to come with the VirtIO backend. In fact I
would recommend that you send that ASAP, so that you don't spend time
working on the backend that would likely need to be modified
according to the review received on the IOREQ series.

Completely agree with you, I will send it after splitting the IOREQ patch and performing some cleanup.

However, it is going to take some time to do it properly, taking into account that I personally won't be able to test on x86.
I think other members of the community should be able to help here. However, nowadays testing Xen on x86 is pretty easy with QEMU :).

That's good.


--
Regards,

Oleksandr Tyshchenko




 

