
Re: [PATCH] stubdom: foreignmemory: Fix build after 0dbb4be739c5

On 14/07/2021 07:11, Jan Beulich wrote:
On 13.07.2021 18:33, Julien Grall wrote:

On 13/07/2021 17:27, Jan Beulich wrote:
On 13.07.2021 18:15, Julien Grall wrote:
On 13/07/2021 16:52, Jan Beulich wrote:
On 13.07.2021 16:33, Julien Grall wrote:
On 13/07/2021 15:23, Jan Beulich wrote:
On 13.07.2021 16:19, Julien Grall wrote:
On 13/07/2021 15:14, Jan Beulich wrote:
And I don't think it should be named XC_PAGE_*, but rather XEN_PAGE_*.

Even that doesn't seem right to me, at least in principle. There shouldn't
be a build time setting when it may vary at runtime. IOW on Arm I think a
runtime query to the hypervisor would be needed instead.

Yes, we want to be able to use the same userspace/OS without rebuilding
to a specific hypervisor page size.

And thinking
even more generally, perhaps there could also be mixed (base) page sizes
in use at run time, so it may need to be a bit mask which gets returned.

I am not sure I understand this. Are you saying the hypervisor may use
different page sizes at the same time?

I think so, yes. And I further think the hypervisor could even allow its
guests to do so.

This is already the case on Arm. We need to differentiate between the
page size used by the guest and the one used by Xen for the stage-2 page
table (what you call EPT on x86).

In this case, we are talking about the page size used by the hypervisor
to configure the stage-2 page table.

There would be a distinction between the granularity at
which RAM gets allocated and the granularity at which page mappings (RAM
or other) can be established. Which yields an environment which I'd say
has no clear "system page size".

I don't quite understand why you would allocate and establish the memory
with different page sizes in the hypervisor. Can you give an example?

Pages may get allocated in 16k chunks, but there may be ways to map
4k MMIO regions, 4k grants, etc. Due to the 16k allocation granularity
you'd e.g. still balloon pages in and out at 16k granularity.

Right, 16KB is a multiple of 4KB, so a guest could say "Please allocate
a contiguous chunk of four 4KB pages".

From my understanding, you are suggesting to tell the guest that we
"support 4KB, 16KB, 64KB...". However, it should be sufficient to say
"we support 4KB and all its multiples".

No - in this case it could legitimately expect to be able to balloon
out a single 4k page. Yet that's not possible with 16k allocation
granularity.

I am confused... why would you want to put such restriction? IOW, what
are you trying to protect against?

Protect? It may simply be that the most efficient page size is 16k.
Hence accounting of pages may be done at 16k granularity.

I am assuming you are speaking about accounting in the hypervisor. So...

IOW there
then is one struct page_info per 16k page. How would you propose a
guest would alloc/free 4k pages in such a configuration?
... the hypercall interface would be using 16KB page granularity as a base.

But IIUC, you are thinking of also allowing mappings to be done with 4KB. I think, from the hypercall interface's point of view, this should be considered as a subpage.

I am not entirely convinced the subpage size should be exposed in a generic hypercall query, because only a subset of deployments will support it. If all supported it, the base granularity would be the subpage granularity, rendering the discussion moot...

Anyway, we can discuss that when there is a formal proposal on the ML.


Julien Grall


