Re: [PATCH] stubdom: foreignmemory: Fix build after 0dbb4be739c5
On 13.07.2021 18:15, Julien Grall wrote:
> On 13/07/2021 16:52, Jan Beulich wrote:
>> On 13.07.2021 16:33, Julien Grall wrote:
>>> On 13/07/2021 15:23, Jan Beulich wrote:
>>>> On 13.07.2021 16:19, Julien Grall wrote:
>>>>> On 13/07/2021 15:14, Jan Beulich wrote:
>>>>>>> And I don't think it should be named XC_PAGE_*, but rather XEN_PAGE_*.
>>>>>>
>>>>>> Even that doesn't seem right to me, at least in principle. There shouldn't
>>>>>> be a build time setting when it may vary at runtime. IOW on Arm I think a
>>>>>> runtime query to the hypervisor would be needed instead.
>>>>>
>>>>> Yes, we want to be able to use the same userspace/OS without rebuilding
>>>>> it for a specific hypervisor page size.
>>>>>
>>>>>> And thinking
>>>>>> even more generally, perhaps there could also be mixed (base) page sizes
>>>>>> in use at run time, so it may need to be a bit mask which gets returned.
>>>>>
>>>>> I am not sure I understand this. Are you saying the hypervisor may use
>>>>> different page sizes at the same time?
>>>>
>>>> I think so, yes. And I further think the hypervisor could even allow its
>>>> guests to do so.
>>>
>>> This is already the case on Arm. We need to differentiate between the
>>> page size used by the guest and the one used by Xen for the stage-2 page
>>> table (what you call EPT on x86).
>>>
>>> In this case, we are talking about the page size used by the hypervisor
>>> to configure the stage-2 page table.
>>>
>>>> There would be a distinction between the granularity at
>>>> which RAM gets allocated and the granularity at which page mappings (RAM
>>>> or other) can be established. Which yields an environment which I'd say
>>>> has no clear "system page size".
>>>
>>> I don't quite understand why you would allocate and establish the memory
>>> with different page sizes in the hypervisor. Can you give an example?
>>
>> Pages may get allocated in 16k chunks, but there may be ways to map
>> 4k MMIO regions, 4k grants, etc. Due to the 16k allocation granularity
>> you'd e.g. still balloon pages in and out at 16k granularity.

> Right, 16KB is a multiple of 4KB, so a guest could say "Please allocate
> a contiguous chunk of 4 4KB pages".
>
> From my understanding, you are suggesting to tell the guest that we
> "support 4KB, 16KB, 64KB...". However, it should be sufficient to say
> "we support 4KB and all its multiples".

No - in this case it could legitimately expect to be able to balloon
out a single 4k page. Yet that's not possible with 16k allocation
granularity.

Jan

> For a hypervisor configured with 16KB (or 64KB) as the smallest page
> granularity, we would say "we support 16KB (resp. 64KB) and all its
> multiples".
>
> So the only thing we need is a way to query the smallest page granularity
> supported. This could be a shift, a size, whatever...
>
> If the guest supports a smaller page granularity, then the guest
> would need to make sure to adapt the ballooning, grants... so they are at
> least a multiple of the page granularity supported by the hypervisor.
>
> Cheers,
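[A minimal C sketch of the scheme discussed above, purely for illustration.
It assumes a hypothetical query, xen_get_min_page_granularity_shift(), standing
in for whatever hypercall would actually expose the hypervisor's smallest
stage-2 granularity as a shift; no such interface exists in Xen today - that
is exactly what the thread is proposing.]

    #include <stdio.h>
    #include <stddef.h>

    /*
     * Hypothetical stand-in for a runtime query to the hypervisor.
     * Returns the smallest page granularity supported, as a shift
     * (12 = 4KB, 14 = 16KB, 16 = 64KB). Stubbed here to pretend the
     * hypervisor was configured with 16KB allocation granularity.
     */
    static unsigned int xen_get_min_page_granularity_shift(void)
    {
        return 14;
    }

    /*
     * Round a guest request (in bytes) up to a multiple of the
     * hypervisor's granularity, as the guest would have to do for
     * ballooning, grants, etc.
     */
    static size_t round_to_hv_granularity(size_t bytes)
    {
        size_t gran = (size_t)1 << xen_get_min_page_granularity_shift();

        return (bytes + gran - 1) & ~(gran - 1);
    }

    int main(void)
    {
        /*
         * A guest built with 4KB pages asking to balloon out a single
         * page: with 16KB hypervisor granularity the request must be
         * rounded up to 16KB, i.e. four of the guest's own pages -
         * Jan's objection to advertising "4KB and all its multiples".
         */
        printf("balloon unit: %zu bytes\n", round_to_hv_granularity(4096));
        return 0;
    }

[Whether the query returns a shift, a byte count, or - per Jan's mixed-size
scenario - a bit mask of supported granularities is left open in the thread;
the rounding logic above works the same for any single advertised minimum.]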