
Re: Virtio in Xen on Arm (based on IOREQ concept)

On 22/07/2020 12:10, Roger Pau Monné wrote:
On Wed, Jul 22, 2020 at 11:47:18AM +0100, Julien Grall wrote:

You can still use the map-on-fault behaviour as above, but I would
recommend that you try to limit the number of hypercalls issued.
Issuing a single hypercall for each page fault is going to be slow,
so I would instead use mmap batch to map the whole range in
unpopulated physical memory and then have the OS fault handler simply
fill the page tables with the corresponding addresses.
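
(For illustration, a minimal userspace sketch of the batched approach
using the stable libxenforeignmemory API; DOMID, NR_PAGES and the gfn
list are placeholders, and error handling is trimmed. The point is
that one call maps the whole range, rather than one hypercall per
faulting page:)

/* Sketch: batch-map a range of foreign frames in one operation.
 * Build with -lxenforeignmemory. DOMID and the gfn list are
 * illustrative placeholders. */
#include <stddef.h>
#include <sys/mman.h>
#include <xenforeignmemory.h>

#define DOMID    1        /* illustrative guest domain id */
#define NR_PAGES 256      /* map 1MiB worth of 4K frames at once */

void *map_guest_range(xen_pfn_t first_gfn)
{
    xenforeignmemory_handle *fmem = xenforeignmemory_open(NULL, 0);
    xen_pfn_t gfns[NR_PAGES];
    int err[NR_PAGES];
    unsigned int i;

    if (!fmem)
        return NULL;

    for (i = 0; i < NR_PAGES; i++)
        gfns[i] = first_gfn + i;

    /* One batched mapping for the whole range; per-frame status is
     * reported in err[]. */
    return xenforeignmemory_map(fmem, DOMID, PROT_READ | PROT_WRITE,
                                NR_PAGES, gfns, err);
}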
IIUC your proposal, you are assuming that you will have enough free
space in the physical address space to map the foreign mappings.

However, that amount of free space is not unlimited and may be quite
small (see above). It would be fairly easy to exhaust it, given that
a userspace application can map the same guest physical address many
times.

So I still think we need to allow Linux to swap a foreign page with
another page.

Right, but you will have to be careful to make sure physical
addresses are not swapped while being used for IO with devices, as in
that case you won't get a recoverable fault. This is safe now because
physical mappings created by privcmd are never swapped out, but if
you go the route you propose you will have to figure out a way to
correctly populate physical ranges used for IO with devices, even
when the CPU hasn't accessed them.

Relying solely on CPU page faults to populate them will not be
enough, as the CPU won't necessarily access all the pages that would
be sent to devices for IO.
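
(To make the failure mode concrete: a map-on-fault scheme hooks the
VMA's fault handler, roughly as sketched below. gfn_for_offset() and
map_one_foreign_gfn() are hypothetical placeholders for the real
offset lookup and hypercall-backed mapping. Only CPU accesses reach
.fault, so device DMA into a not-yet-populated page of the range
never triggers it:)

/* Hypothetical map-on-fault handler for a privcmd-style VMA. Only
 * CPU accesses come through here; device DMA does not. */
#include <linux/err.h>
#include <linux/mm.h>

static vm_fault_t foreign_vma_fault(struct vm_fault *vmf)
{
    struct vm_area_struct *vma = vmf->vma;
    /* Placeholder: translate the VMA offset to a guest frame. */
    unsigned long gfn = gfn_for_offset(vma, vmf->pgoff);
    struct page *page;

    /* Placeholder: issue the (batched) foreign-map hypercall. */
    page = map_one_foreign_gfn(vma, gfn);
    if (IS_ERR(page))
        return VM_FAULT_SIGBUS;

    vmf->page = page;
    return 0;
}

static const struct vm_operations_struct foreign_vm_ops = {
    .fault = foreign_vma_fault,
};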

The problem you described here doesn't seem to be specific to foreign
mappings, so I would be really surprised if Linux didn't already have
a generic mechanism to deal with this.

Hence my earlier suggestion to handle foreign mappings the same way
Linux handles user memory.
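
(For reference, the generic mechanism Linux already has for user
memory handed to devices is get_user_pages()/pin_user_pages(): the
pages are faulted in and pinned before the DMA is set up, independent
of any CPU access. A minimal driver-side sketch; if foreign mappings
behaved like ordinary user memory, the same path would populate them:)

/* Sketch: populate and pin a user buffer before device IO. The
 * fault-in happens here, not via CPU page faults during the IO. */
#include <linux/mm.h>

static int pin_user_buffer(unsigned long uaddr, int nr_pages,
                           struct page **pages)
{
    int pinned;

    /* Faults in any not-yet-populated pages, then pins them so they
     * cannot be swapped out for the lifetime of the IO. */
    pinned = pin_user_pages_fast(uaddr, nr_pages,
                                 FOLL_WRITE | FOLL_LONGTERM, pages);
    if (pinned < 0)
        return pinned;              /* fault or permission error */
    if (pinned != nr_pages) {
        unpin_user_pages(pages, pinned);
        return -EFAULT;
    }
    return 0;
}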


Julien Grall


