Re: [Xen-devel] [PATCH v2 5/9] xen/gntdev: Allow mappings for DMA buffers
On 06/11/2018 08:46 PM, Julien Grall wrote:
> Hi,
>
> On 06/11/2018 06:16 PM, Oleksandr Andrushchenko wrote:
>> On 06/11/2018 07:51 PM, Stefano Stabellini wrote:
>>> On Mon, 11 Jun 2018, Oleksandr Andrushchenko wrote:
>>>> On 06/08/2018 10:21 PM, Boris Ostrovsky wrote:
>>>>> On 06/08/2018 01:59 PM, Stefano Stabellini wrote:
>>>>>>>>>>>>> @@ -325,6 +401,14 @@ static int map_grant_pages(struct grant_map *map)
>>>>>>>>>>>>>  		map->unmap_ops[i].handle = map->map_ops[i].handle;
>>>>>>>>>>>>>  		if (use_ptemod)
>>>>>>>>>>>>>  			map->kunmap_ops[i].handle = map->kmap_ops[i].handle;
>>>>>>>>>>>>> +#ifdef CONFIG_XEN_GRANT_DMA_ALLOC
>>>>>>>>>>>>> +	else if (map->dma_vaddr) {
>>>>>>>>>>>>> +		unsigned long mfn;
>>>>>>>>>>>>> +
>>>>>>>>>>>>> +		mfn = __pfn_to_mfn(page_to_pfn(map->pages[i]));
>>>>>>>>>>>> Not pfn_to_mfn()?
>>>>>>>>>>> I'd love to, but pfn_to_mfn is only defined for x86, not ARM: [1] and [2]
>>>>>>>>>>> Thus,
>>>>>>>>>>>   drivers/xen/gntdev.c:408:10: error: implicit declaration of function
>>>>>>>>>>>   ‘pfn_to_mfn’ [-Werror=implicit-function-declaration]
>>>>>>>>>>>     mfn = pfn_to_mfn(page_to_pfn(map->pages[i]));
>>>>>>>>>>> So, I'll keep __pfn_to_mfn
>>>>>>>>>> How will this work on non-PV x86?
>>>>>>>>> So, you mean I need:
>>>>>>>>> #ifdef CONFIG_X86
>>>>>>>>>     mfn = pfn_to_mfn(page_to_pfn(map->pages[i]));
>>>>>>>>> #else
>>>>>>>>>     mfn = __pfn_to_mfn(page_to_pfn(map->pages[i]));
>>>>>>>>> #endif
>>>>>>>> I'd rather fix it in ARM code. Stefano, why does ARM use the
>>>>>>>> underscored version?
>>>>>>> Do you want me to add one more patch for ARM to wrap __pfn_to_mfn
>>>>>>> with a static inline, e.g.
>>>>>>>     static inline ... pfn_to_mfn(...) { __pfn_to_mfn(); }
>>>>>> A Xen on ARM guest doesn't actually know the mfns behind its own
>>>>>> pseudo-physical pages. This is why we stopped using pfn_to_mfn and
>>>>>> started using pfn_to_bfn instead, which will generally return "pfn",
>>>>>> unless the page is a foreign grant. See include/xen/arm/page.h.
>>>>>> pfn_to_bfn was also introduced on x86. For example, see the usage of
>>>>>> pfn_to_bfn in drivers/xen/swiotlb-xen.c. Otherwise, if you don't care
>>>>>> about other mapped grants, you can just use pfn_to_gfn, which always
>>>>>> returns pfn.
>>>>> I think then this code needs to use pfn_to_bfn().
>>>> Ok
>>>>>> Also, for your information, we support different page granularities
>>>>>> in Linux as a Xen guest, see the comment at include/xen/arm/page.h:
>>>>>>
>>>>>>   /*
>>>>>>    * The pseudo-physical frame (pfn) used in all the helpers is always based
>>>>>>    * on Xen page granularity (i.e 4KB).
>>>>>>    *
>>>>>>    * A Linux page may be split across multiple non-contiguous Xen page so we
>>>>>>    * have to keep track with frame based on 4KB page granularity.
>>>>>>    *
>>>>>>    * PV drivers should never make a direct usage of those helpers (particularly
>>>>>>    * pfn_to_gfn and gfn_to_pfn).
>>>>>>    */
>>>>>>
>>>>>> A Linux page could be 64K, but a Xen page is always 4K. A granted page
>>>>>> is also 4K. We have helpers to take into account the offsets to map
>>>>>> multiple Xen grants in a single Linux page, see for example
>>>>>> drivers/xen/grant-table.c:gnttab_foreach_grant. Most PV drivers have
>>>>>> been converted to be able to work with 64K pages correctly, but if I
>>>>>> remember correctly gntdev.c is the only remaining driver that doesn't
>>>>>> support 64K pages yet, so you don't have to deal with it if you don't
>>>>>> want to.
>>>>> I believe somewhere in this series there is a test for PAGE_SIZE vs.
>>>>> XEN_PAGE_SIZE. Right, Oleksandr?
>>>> Not in gntdev. You might have seen this in xen-drmfront/xen-sndfront,
>>>> but I didn't touch gntdev for that. Do you want me to add yet another
>>>> patch in the series to check for that?
>>> gntdev.c is already not capable of handling PAGE_SIZE != XEN_PAGE_SIZE,
>>> so you are not going to break anything that is not already broken :-)
>>> If your new gntdev.c code relies on PAGE_SIZE == XEN_PAGE_SIZE, it might
>>> be good to add an in-code comment about it, just to make it easier to
>>> fix the whole of gntdev.c in the future.
>> Yes, I just mean I can add something like [1] as a separate patch to the
>> series, so we are on the safe side here
> See my comment on Stefano's e-mail. I believe gntdev is able to handle
> PAGE_SIZE != XEN_PAGE_SIZE. So I would rather keep the behavior we have
> today for such case.
Sure, with a note that we waste most of a 64KiB page ;)
> Cheers,
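For reference, a minimal sketch of the pfn_to_bfn()-based variant of the
hunk quoted above, following the conclusion reached in the thread. It is
illustrative only: the bfn naming and the dev_bus_addr assignment are
assumptions about how the frame number is consumed, not necessarily what
finally landed in the driver.

    /* Sketch of the map_grant_pages() loop body using pfn_to_bfn(), per
     * the discussion above. pfn_to_bfn() exists on both x86 and ARM (see
     * include/xen/arm/page.h and its use in drivers/xen/swiotlb-xen.c);
     * it returns the pfn itself unless the page is backed by a foreign
     * grant, so it is safe for auto-translated (ARM, non-PV x86) guests.
     */
    #ifdef CONFIG_XEN_GRANT_DMA_ALLOC
    	else if (map->dma_vaddr) {
    		unsigned long bfn;

    		bfn = pfn_to_bfn(page_to_pfn(map->pages[i]));
    		/* Assumption: the frame number feeds the unmap op's
    		 * device bus address, as the field name suggests. */
    		map->unmap_ops[i].dev_bus_addr = __pfn_to_phys(bfn);
    	}
    #endif

Using pfn_to_bfn() keeps a single code path for x86 and ARM, avoiding the
CONFIG_X86 #ifdef split floated earlier in the thread.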