
Re: Understanding osdep_xenforeignmemory_map mmap behaviour



On 24.08.22 11:19, Viresh Kumar wrote:
On 24-03-22, 06:12, Juergen Gross wrote:
For a rather long time we were using "normal" user pages for this purpose,
which were just locked into memory for doing the hypercall.

Unfortunately there have been very rare problems with that approach, as
the Linux kernel can mark the PTE of a user page invalid for short
periods of time, which led to EFAULT in the hypervisor when it tried to
access the hypercall data.

In Linux this can be avoided only by using kernel memory, which is the
reason why the hypercall buffers are allocated and mmap()-ed through the
privcmd driver.
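
For illustration, here is a minimal sketch of what that allocation path can look like from userspace; this is not the actual Xen tools code, and the device path /dev/xen/hypercall is an assumption based on a typical Linux/Xen setup:

/*
 * Minimal sketch (not the actual Xen tools code): obtain kernel-backed
 * memory for hypercall buffers by mmap()-ing the privcmd buffer device.
 * The device path /dev/xen/hypercall is an assumption and may differ.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    long page_size = sysconf(_SC_PAGESIZE);

    int fd = open("/dev/xen/hypercall", O_RDWR | O_CLOEXEC);
    if (fd < 0) {
        perror("open /dev/xen/hypercall");
        return 1;
    }

    /* One page of kernel-allocated memory, mapped into our address space. */
    void *buf = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return 1;
    }

    /*
     * buf can now hold hypercall arguments without the risk of its PTE
     * being transiently invalidated, as can happen with locked user pages.
     */

    munmap(buf, page_size);
    close(fd);
    return 0;
}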

Hi Juergen,

I understand why we moved from user pages to kernel pages, but I don't
fully understand why we need to make two separate calls to map the
guest memory, i.e. mmap() followed by ioctl(IOCTL_PRIVCMD_MMAPBATCH).

Why aren't we doing all of it from mmap() itself? I hacked it up to
check, and it works fine if we do it all from mmap() itself.
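
For readers following along, a minimal sketch of the two-step pattern being discussed (not the actual libxenforeignmemory code): mmap() first reserves a range of virtual address space backed by the privcmd device, then IOCTL_PRIVCMD_MMAPBATCH asks the kernel to populate that range with the guest's frames. The header path <xen/sys/privcmd.h>, the helper map_guest_pages(), and the domid/gfn values are illustrative assumptions only:

/*
 * Sketch of the two-step mapping: mmap() reserves the VA range, then
 * IOCTL_PRIVCMD_MMAPBATCH backs it with guest frames.  Header locations
 * and the device node are assumptions and may need adjusting.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

#include <xen/xen.h>            /* domid_t, xen_pfn_t */
#include <xen/sys/privcmd.h>    /* struct privcmd_mmapbatch, ioctl numbers */

/* Hypothetical helper: map 'num' guest frames of domain 'dom'. */
static void *map_guest_pages(int fd, domid_t dom, xen_pfn_t *gfns, int num)
{
    long page_size = sysconf(_SC_PAGESIZE);

    /* Step 1: reserve the VA range, backed by the privcmd device. */
    void *addr = mmap(NULL, (size_t)num * page_size, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (addr == MAP_FAILED)
        return NULL;

    /* Step 2: tell the kernel which guest frames should appear there. */
    struct privcmd_mmapbatch batch = {
        .num  = num,
        .dom  = dom,
        .addr = (uint64_t)(uintptr_t)addr,
        .arr  = gfns,
    };
    if (ioctl(fd, IOCTL_PRIVCMD_MMAPBATCH, &batch) < 0) {
        munmap(addr, (size_t)num * page_size);
        return NULL;
    }
    return addr;
}

int main(void)
{
    int fd = open("/dev/xen/privcmd", O_RDWR | O_CLOEXEC);
    if (fd < 0) {
        perror("open /dev/xen/privcmd");
        return 1;
    }

    xen_pfn_t gfns[1] = { 0x1000 };   /* placeholder guest frame number */
    void *p = map_guest_pages(fd, 1 /* placeholder domid */, gfns, 1);
    if (p == NULL)
        perror("map_guest_pages");
    else
        munmap(p, sysconf(_SC_PAGESIZE));

    close(fd);
    return 0;
}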

Hypercall buffers are needed for more than just the "MMAPBATCH" hypercall.
Or are you suggesting one device per possible hypercall?

Aren't we abusing the Linux userspace ABI here? Standard userspace
code would expect just mmap() to be enough to map the memory. Yes, the
current user, Xen itself, is adapted to make two calls, but it breaks
as soon as we want to use something that relies on the Linux userspace
ABI.

I think you are still mixing up the hypercall buffers with the memory
you want to map via the hypercall. At least the reference to kernel
memory above suggests that.


Juergen



 

