
Re: [Xen-devel] [PATCH v1 08/10] iommu: Split iommu_hwdom_init() into arch specific parts



Hi Jan,

On 05/15/2017 09:19 AM, Jan Beulich wrote:
On 15.05.17 at 09:42, <julien.grall@xxxxxxx> wrote:
On 15/05/2017 08:20, Jan Beulich wrote:
Having thought about this some more, what's still missing is a
clear explanation why this new need of a non-stub mfn_to_gmfn()
isn't finally enough of a reason to introduce an M2P on ARM. We
currently have several uses already which ARM fakes one way or
another:
- gnttab_shared_gmfn()

This does not use mfn_to_gmfn on ARM.

Right, at the price of maintaining some other helper data.

And saving a few MB of memory on small boards, and hundreds of MB on servers, compared to what an M2P would use. The choice is very easy here.


- gnttab_status_gmfn()

gnttab_status_gmfn() returns 0 so far. I have to look at this one.

- memory_exchange()

Memory exchange does not work on ARM today and will require more work
than that. When I looked at the code a couple of years ago, it was
possible to drop the call to mfn_to_gmfn().

- getdomaininfo()

We could rework to store the gmfn in arch_domain.

Which again would mean you maintain extra data in order to avoid
the more general M2P.

Yes, saving MBs, as above.
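
To illustrate what I mean, storing the shared_info GFN in arch_domain could look roughly like the sketch below (the field and helper names are hypothetical, not existing code):

/* Hypothetical sketch: remember the GFN at which the guest mapped
 * shared_info, so getdomaininfo() no longer needs mfn_to_gmfn(). */
struct arch_domain {
    /* ... existing fields ... */
    gfn_t shared_info_gfn;          /* INVALID_GFN until the guest maps it */
};

/* Called from the XENMAPSPACE_shared_info handler: */
static inline void set_shared_info_gfn(struct domain *d, gfn_t gfn)
{
    d->arch.shared_info_gfn = gfn;
}

/* In getdomaininfo(), instead of mfn_to_gmfn(d, ...): */
/* info->shared_info_frame = gfn_x(d->arch.shared_info_gfn); */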


With this I think there's quite a bit of justification needed to keep
going without M2P on ARM.

As said in a previous thread, I made a quick calculation: ARM32 supports up to 40-bit PA and IPA (i.e. guest addresses), which means 28 bits of MFN/GFN. The GFN would have to be stored in a 32-bit integer for alignment, so we would need 2^28 * 4 = 1GiB of virtual address space and potentially physical memory. We don't have 1GiB of VA space free on 32-bit right now.

How come? You don't share address spaces with guests.

Below is the layout for ARM32:


 *   0  -  12M   <COMMON>
 *
 *  32M - 128M   Frametable: 24 bytes per page for 16GB of RAM
 * 256M -   1G   VMAP: ioremap and early_ioremap use this virtual address
 *                    space
 *
 *   1G -   2G   Xenheap: always-mapped memory
 *   2G -   4G   Domheap: on-demand-mapped
 *


ARM64 currently supports up to 48-bit PA and 48-bit IPA, which means 36 bits of MFN/GFN. The GFN would have to be stored in a 64-bit integer for alignment, so we would need 2^36 * 8 = 512GiB of virtual address space and potentially physical memory. While virtual address space is not a problem, the memory is a problem for embedded platforms. We want Xen to be as lean as possible.

You don't need to allocate all 512GiB; the table can be as sparse as
present memory permits.

I am aware of that... The main point is reducing the footprint of Xen. Assuming you have an 8GB board, you would have to use 16MB for the M2P.
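
For reference, a rough sketch of the arithmetic behind these numbers, assuming 4KB pages and one M2P entry per page frame (a back-of-the-envelope calculation, not Xen code):

#include <stdio.h>

int main(void)
{
    /* ARM32: 40-bit PA, 4KB pages -> 28-bit MFNs, 4-byte entries. */
    unsigned long long arm32_va = (1ULL << (40 - 12)) * 4;    /* 1 GiB */
    /* ARM64: 48-bit PA, 4KB pages -> 36-bit MFNs, 8-byte entries. */
    unsigned long long arm64_va = (1ULL << (48 - 12)) * 8;    /* 512 GiB */
    /* A sparse table on an 8GB board still needs one entry per page of RAM. */
    unsigned long long board_8gb = ((8ULL << 30) >> 12) * 8;  /* 16 MiB */

    printf("ARM32 M2P VA space: %llu GiB\n", arm32_va >> 30);
    printf("ARM64 M2P VA space: %llu GiB\n", arm64_va >> 30);
    printf("M2P for an 8GB board: %llu MiB\n", board_8gb >> 20);
    return 0;
}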

This will likely increase the footprint of Xen ARM. For what benefit? Avoiding storing a few bytes in a non-generic way when we need to...

I will comment on the IOMMU below.


So the M2P is not a solution on ARM. A better approach is to drop those
calls from common code and replace them with something different (as we did for
gnttab_shared_mfn).

I'm of the exact opposite opinion. Or at the very least, have a mode
(read: config or command line option) where ARM maintains M2P and
make features like the IOMMU one here depend on being in that mode.

Well, on embedded platforms you know in advance that you will pass through devices to a guest. So there is no point in deferring the creation of the page-tables until a device has been assigned.

On the server side, I would expect page-tables to be shared most of the time. We might have to unshare some parts of the page-tables, but not everything as is currently done on x86.
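
To illustrate what shared page-tables buy us, assigning a device could look roughly like the sketch below (illustrative only; the helper names are hypothetical, not the actual SMMU driver API):

/*
 * Illustrative sketch: with shared page-tables, attaching a device to a
 * domain just points the IOMMU context at the existing stage-2 root,
 * instead of building and maintaining a separate IOMMU page-table.
 */
static int iommu_attach_device_shared(struct domain *d, struct device *dev)
{
    /* Reuse the CPU stage-2 translation root for DMA translation. */
    paddr_t s2_root = page_to_maddr(d->arch.p2m.root);

    /* Hypothetical helper programming the IOMMU translation registers. */
    return iommu_hw_set_ttbr(dev, s2_root, d->arch.p2m.vmid);
}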

So far, you haven't convinced me the M2P is the right solution for ARM. We would use more memory for the benefits of, AFAICT, device hotplugging (?) and being "generic".

Anyway, I will let Stefano give his opinion on it.

Cheers,

--
Julien Grall
