Re: [Xen-devel] [PATCH 13/25] argo: implement the register op
Hi Christopher,

On 05/12/2018 22:35, Christopher Clark wrote:
> On Wed, Dec 5, 2018 at 9:20 AM Julien Grall <julien.grall@xxxxxxx> wrote:
>> On 04/12/2018 09:08, Christopher Clark wrote:
>>> On Sun, Dec 2, 2018 at 12:11 PM Julien Grall <Julien.Grall@xxxxxxx> wrote:
>>>> On 01/12/2018 01:32, Christopher Clark wrote:
>>>>> diff --git a/xen/include/public/argo.h b/xen/include/public/argo.h
>>>>> ...
>>>>> +/* pfn type: 64-bit on all architectures to aid avoiding a compat ABI */
>>>>> +typedef uint64_t argo_pfn_t;
>>>>
>>>> As you always use 64-bit, can we just use an address? This would make
>>>> the ABI agnostic to the hypervisor page granularity.
>>
>> By address I meant guest physical address (and not guest virtual address).
>>
>> Arm processors may support multiple page granularities (4KB, 16KB, 64KB),
>> and the software is allowed to use a different granularity at each level.
>> This means that the hypervisor could use 4KB pages while the guest kernel
>> uses 64KB pages (and vice versa). Some distros have chosen to support only
>> one page granularity (i.e. 64KB for RHEL, 4KB for Debian...).
>>
>> At the moment the hypercall interface is based on the hypervisor page
>> granularity. Because Xen has always supported 4KB page granularity, this
>> assumption was also hardcoded in the kernel.
>>
>> What prevents us from getting 64KB page support in Xen (and therefore
>> support for 52-bit addresses) is the hypercall ABI. If you upgraded Xen to
>> 64KB, the hypercall interface would de facto use 64KB frames, which would
>> break any current guest. It is also not possible to keep 4KB pages
>> everywhere, because you can only map 64KB at a time in Xen, so you may map
>> a bit too much from another guest.
>>
>> This makes me think that a frame is probably not the best fit in that
>> situation. Instead, a pair of address/size would be more suitable.
>>
>> The problem is much larger than this series, but I thought I would attempt
>> to convince the community to use guest physical addresses over guest frame
>> numbers whenever it is possible.
>
> Thanks, Julien -- that explanation is very helpful and your request makes
> sense.
>
> So in concrete terms, with the change that you're advocating for to this
> patch, the 64-bit value that is supplied by the guest in the array passed
> as an argument to register_ring would encode the same guest physical frame
> number as it currently does in the patch version presented in this thread,
> but it would be bit-shifted to the position used in a physical address.
> In addition to that change, a page size indicator would be supplied too --
> for every page address supplied in the call.
>
> Is there a method currently used within Xen (or relevant places elsewhere)
> for encoding both the page address and size (ie. 4KB, 16KB or 64KB) within
> the same 64 bits? ie. Knowing that the smallest granularity of page is 4KB,
> and that all pages are aligned to at least a 4KB boundary, there are low
> bits in the address that are known to be zero, and those could be used to
> indicate the page size when supplied to this call. It seems like such an
> encoding would allow for avoiding doubling the size of the argument array,
> but I'm not sure how inconvenient it would be to work with in practice.
>
> If so, such an interface change looks manageable and hopefully it would be
> acceptable to only support 4KB pages in the current implementation behind
> that new ABI for the time being.
>
> Let me know what you think.
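For illustration only, the low-bit encoding described above could look
roughly like the following. The names, constants and helpers here are
invented for the sketch and are not an existing Xen or argo interface:

/*
 * Hypothetical sketch only -- not an existing Xen/argo interface.
 * A page-aligned guest physical address always has its low 12 bits
 * clear, so two of them can carry a granularity indicator.
 */
#include <stdint.h>

#define ARGO_GADDR_GRAN_MASK   0x3ULL   /* low two bits: granularity code */
#define ARGO_GADDR_GRAN_4K     0x0ULL
#define ARGO_GADDR_GRAN_16K    0x1ULL
#define ARGO_GADDR_GRAN_64K    0x2ULL

/* Pack a page-aligned guest physical address with its granularity code. */
static inline uint64_t argo_gaddr_encode(uint64_t gaddr, uint64_t gran_code)
{
    return (gaddr & ~ARGO_GADDR_GRAN_MASK) | gran_code;
}

/* Recover the address and the page size in bytes from the packed value. */
static inline uint64_t argo_gaddr_decode(uint64_t packed, uint64_t *page_size)
{
    switch ( packed & ARGO_GADDR_GRAN_MASK )
    {
    case ARGO_GADDR_GRAN_16K: *page_size = 16384; break;
    case ARGO_GADDR_GRAN_64K: *page_size = 65536; break;
    default:                  *page_size = 4096;  break;
    }
    return packed & ~ARGO_GADDR_GRAN_MASK;
}

Two spare low bits are enough to distinguish 4KB/16KB/64KB, so the argument
array would keep its current size, as noted above.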
If you let the user choose the granularity then, I believe, you will prevent
the hypervisor from doing some optimisations. For instance, if the guest
supplies only 4KB pages but the hypervisor uses 64KB pages, there is no way
to easily map them contiguously in the hypervisor (e.g. using vmap).

Is there a particular reason to allow the ring buffer to be non-contiguous
in guest physical address space?

Depending on the answer, there are different ways to handle that:
 1) Request the guest to allocate memory in 64KB (on Arm) chunks and pass
    the base address of each chunk
 2) Request the guest to allocate the buffer contiguously and pass the base
    address and size (see the sketch below)
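As a rough sketch of what option 2 could look like -- all names below are
hypothetical and do not match the actual argo register interface:

/*
 * Hypothetical sketch of option 2 only.  The guest allocates the ring
 * contiguously in guest physical address space and describes it by a
 * base address and a length, so no page granularity appears in the ABI.
 */
#include <stdint.h>

struct argo_ring_region_sketch {
    uint64_t base_gaddr;  /* guest physical base address of the ring */
    uint64_t len;         /* length of the ring in bytes */
    /* ... the existing ring identification fields would remain ... */
};

/*
 * Hypothetical check the hypervisor could apply on registration;
 * hv_page_size is assumed to be a power of two.
 */
static inline int argo_ring_region_ok(const struct argo_ring_region_sketch *r,
                                      uint64_t hv_page_size)
{
    return r->len != 0 &&
           !(r->base_gaddr & (hv_page_size - 1)) &&
           !(r->len & (hv_page_size - 1));
}

With a single contiguous region, the hypervisor can validate alignment
against its own page granularity and map the whole buffer in one go (e.g.
with vmap), whichever granularity the guest itself uses.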
Cheers,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel