
Re: [Xen-devel] [PATCH v2 5/9] xen/gntdev: Allow mappings for DMA buffers



Hi,

On 06/11/2018 06:16 PM, Oleksandr Andrushchenko wrote:
On 06/11/2018 07:51 PM, Stefano Stabellini wrote:
On Mon, 11 Jun 2018, Oleksandr Andrushchenko wrote:
On 06/08/2018 10:21 PM, Boris Ostrovsky wrote:
On 06/08/2018 01:59 PM, Stefano Stabellini wrote:
      @@ -325,6 +401,14 @@ static int map_grant_pages(struct grant_map *map)
              map->unmap_ops[i].handle = map->map_ops[i].handle;
              if (use_ptemod)
                  map->kunmap_ops[i].handle = map->kmap_ops[i].handle;
+#ifdef CONFIG_XEN_GRANT_DMA_ALLOC
+        else if (map->dma_vaddr) {
+            unsigned long mfn;
+
+            mfn = __pfn_to_mfn(page_to_pfn(map->pages[i]));
Not pfn_to_mfn()?
I'd love to, but pfn_to_mfn is only defined for x86, not ARM: [1] and [2]
Thus,

drivers/xen/gntdev.c:408:10: error: implicit declaration of function ‘pfn_to_mfn’ [-Werror=implicit-function-declaration]
        mfn = pfn_to_mfn(page_to_pfn(map->pages[i]));

So, I'll keep __pfn_to_mfn.
How will this work on non-PV x86?
So, you mean I need:
#ifdef CONFIG_X86
mfn = pfn_to_mfn(page_to_pfn(map->pages[i]));
#else
mfn = __pfn_to_mfn(page_to_pfn(map->pages[i]));
#endif

I'd rather fix it in ARM code. Stefano, why does ARM use the underscored version?
Do you want me to add one more patch to wrap __pfn_to_mfn in a static inline for ARM? e.g.
static inline unsigned long pfn_to_mfn(unsigned long pfn)
{
      return __pfn_to_mfn(pfn);
}
A Xen on ARM guest doesn't actually know the mfns behind its own
pseudo-physical pages. This is why we stopped using pfn_to_mfn and
started using pfn_to_bfn instead, which will generally return "pfn",
unless the page is a foreign grant. See include/xen/arm/page.h.
pfn_to_bfn was also introduced on x86. For example, see the usage of
pfn_to_bfn in drivers/xen/swiotlb-xen.c. Otherwise, if you don't care
about other mapped grants, you can just use pfn_to_gfn, which always
returns pfn.
I think then this code needs to use pfn_to_bfn().
Ok
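
For illustration, a minimal sketch of how the quoted hunk could look after
switching to pfn_to_bfn() (only the substitution is shown; the rest of the
branch is omitted here, as it is in the quote above):

#ifdef CONFIG_XEN_GRANT_DMA_ALLOC
		else if (map->dma_vaddr) {
			unsigned long bfn;

			/* pfn_to_bfn() returns the pfn itself unless the page
			 * backs a foreign grant mapping, so this works on ARM
			 * and on non-PV x86 alike. */
			bfn = pfn_to_bfn(page_to_pfn(map->pages[i]));
		}
#endif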

Also, for your information, we support different page granularities in
Linux as a Xen guest; see the comment at include/xen/arm/page.h:
    /*
     * The pseudo-physical frame (pfn) used in all the helpers is always based
     * on Xen page granularity (i.e 4KB).
     *
     * A Linux page may be split across multiple non-contiguous Xen page so we
     * have to keep track with frame based on 4KB page granularity.
     *
     * PV drivers should never make a direct usage of those helpers (particularly
     * pfn_to_gfn and gfn_to_pfn).
     */

A Linux page could be 64K, but a Xen page is always 4K. A granted page
is also 4K. We have helpers to take into account the offsets to map
multiple Xen grants in a single Linux page; see, for example,
drivers/xen/grant-table.c:gnttab_foreach_grant. Most PV drivers have
been converted to be able to work with 64K pages correctly, but if I
remember correctly gntdev.c is the only remaining driver that doesn't
support 64K pages yet, so you don't have to deal with it if you don't
want to.
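
To illustrate the granularity split, here is a rough sketch; page_to_xen_pfn()
and XEN_PFN_PER_PAGE are the existing helpers from include/xen/page.h, while
for_each_xen_frame_of() and process_xen_frame() are purely hypothetical names,
not code from this series:

#include <linux/mm.h>
#include <xen/page.h>

/* Illustrative only: walk the 4KB Xen frames backing one Linux page.
 * A single iteration when PAGE_SIZE == XEN_PAGE_SIZE, sixteen
 * iterations when the kernel uses 64K pages. */
static void for_each_xen_frame_of(struct page *page,
				  void (*process_xen_frame)(unsigned long xen_pfn))
{
	unsigned long xen_pfn = page_to_xen_pfn(page);	/* first 4K frame */
	unsigned int i;

	for (i = 0; i < XEN_PFN_PER_PAGE; i++)
		process_xen_frame(xen_pfn + i);
}
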
I believe somewhere in this series there is a test for PAGE_SIZE vs.
XEN_PAGE_SIZE. Right, Oleksandr?
Not in gntdev. You might have seen this in xen-drmfront/xen-sndfront,
but I didn't touch gntdev for that. Do you want me to add yet another patch
in the series to check for that?
gntdev.c is already not capable of handling PAGE_SIZE != XEN_PAGE_SIZE,
so you are not going to break anything that is not already broken :-) If
your new gntdev.c code relies on PAGE_SIZE == XEN_PAGE_SIZE, it might be
good to add an in-code comment about it, just to make it easier to fix
the whole of gntdev.c in the future.

Yes, I just mean I can add something like [1] as a separate patch to the series,
so we are on the safe side here
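
For reference, such a guard could look roughly like the fragment below. This
is only an assumed shape for it, not the actual [1], and the placement (e.g.
early in gntdev's init path) is likewise just an assumption:

	/* Hypothetical guard, similar in spirit to what the drm/snd
	 * frontends do at probe time: refuse to load when Linux and Xen
	 * page granularities differ, since the code assumes
	 * PAGE_SIZE == XEN_PAGE_SIZE. */
	if (PAGE_SIZE != XEN_PAGE_SIZE) {
		pr_err("gntdev: PAGE_SIZE (%lu) != XEN_PAGE_SIZE (%lu) is not supported\n",
		       PAGE_SIZE, XEN_PAGE_SIZE);
		return -ENODEV;
	}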

See my comment on Stefano's e-mail. I believe gntdev is able to handle PAGE_SIZE != XEN_PAGE_SIZE. So I would rather keep the behavior we have today for such a case.

Cheers,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

