
Re: [Xen-devel] [RFC 1/3] xen/balloon: Allow allocating DMA buffers



On 05/21/2018 09:53 PM, Boris Ostrovsky wrote:
On 05/21/2018 01:32 PM, Oleksandr Andrushchenko wrote:
On 05/21/2018 07:35 PM, Boris Ostrovsky wrote:
On 05/21/2018 01:40 AM, Oleksandr Andrushchenko wrote:
On 05/19/2018 01:04 AM, Boris Ostrovsky wrote:
On 05/17/2018 04:26 AM, Oleksandr Andrushchenko wrote:
From: Oleksandr Andrushchenko <oleksandr_andrushchenko@xxxxxxxx>
A commit message would be useful.
Sure, v1 will have it
Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@xxxxxxxx>

 	for (i = 0; i < nr_pages; i++) {
-		page = alloc_page(gfp);
-		if (page == NULL) {
-			nr_pages = i;
-			state = BP_EAGAIN;
-			break;
+		if (ext_pages) {
+			page = ext_pages[i];
+		} else {
+			page = alloc_page(gfp);
+			if (page == NULL) {
+				nr_pages = i;
+				state = BP_EAGAIN;
+				break;
+			}
 		}
 		scrub_page(page);
 		list_add(&page->lru, &pages);
@@ -529,7 +565,7 @@ static enum bp_state decrease_reservation(unsigned long nr_pages, gfp_t gfp)
 	i = 0;
 	list_for_each_entry_safe(page, tmp, &pages, lru) {
 		/* XENMEM_decrease_reservation requires a GFN */
-		frame_list[i++] = xen_page_to_gfn(page);
+		frames[i++] = xen_page_to_gfn(page);
 #ifdef CONFIG_XEN_HAVE_PVMMU
 		/*
@@ -552,18 +588,22 @@ static enum bp_state decrease_reservation(unsigned long nr_pages, gfp_t gfp)
 #endif
 		list_del(&page->lru);
-		balloon_append(page);
+		if (!ext_pages)
+			balloon_append(page);
So what you are proposing is not really ballooning. You are just
piggybacking on existing interfaces, aren't you?
Sort of. Basically I need {increase|decrease}_reservation without actually
allocating ballooned pages.
Do you think I can simply EXPORT_SYMBOL {increase|decrease}_reservation?
Any other suggestions?
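
[For illustration, a minimal caller-side sketch of that use-case, assuming the
ext_pages parameter from the diff above is added to decrease_reservation()'s
signature and the routine plus its BP_* return codes become visible to other
modules; zcopy_alloc_dma_pages() and the GFP flags are placeholders, not part
of the RFC:]

/*
 * Sketch only: allocate DMA-capable pages in the caller and ask the
 * balloon code to do XENMEM_decrease_reservation on them, bypassing
 * the ballooned-pages lists entirely.
 */
#include <linux/gfp.h>
#include <linux/mm.h>

static int zcopy_alloc_dma_pages(struct page **pages, unsigned long nr_pages)
{
	unsigned long i;

	for (i = 0; i < nr_pages; i++) {
		pages[i] = alloc_page(GFP_KERNEL | __GFP_DMA32);
		if (!pages[i])
			goto err;
	}

	/* Hand the frames back to Xen without touching balloon state. */
	if (decrease_reservation(nr_pages, GFP_KERNEL, pages) != BP_DONE)
		goto err;

	return 0;

err:
	while (i--)
		__free_page(pages[i]);
	return -ENOMEM;
}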
I am actually wondering how much of that code you end up reusing. You
pretty much create new code paths in both routines and common code ends
up being essentially the hypercall.
Well, I hoped it would be easier to maintain if I modified the existing code
to support both use-cases, but I am also fine with creating new routines if
that seems more reasonable - please let me know.
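
[To make that alternative concrete, a sketch of what a separate routine could
look like if it keeps only the hypercall part; the function name is
illustrative, the frames array would have to be provided per call rather than
reuse the static frame_list, and extent_order 0 assumes PAGE_SIZE ==
XEN_PAGE_SIZE:]

/*
 * Sketch of a dedicated helper, separate from decrease_reservation():
 * hand the frames of caller-provided pages back to Xen.  No balloon
 * list handling, no page allocation - essentially just the hypercall.
 */
#include <linux/mm.h>
#include <xen/interface/memory.h>
#include <xen/page.h>
#include <asm/xen/hypercall.h>

static int xen_dma_pages_decrease_reservation(struct page **pages,
					      unsigned long nr_pages,
					      xen_pfn_t *frames)
{
	struct xen_memory_reservation reservation = {
		.address_bits = 0,
		.extent_order = 0,	/* assumes PAGE_SIZE == XEN_PAGE_SIZE */
		.domid        = DOMID_SELF,
	};
	unsigned long i;
	int rc;

	for (i = 0; i < nr_pages; i++)
		frames[i] = xen_page_to_gfn(pages[i]);

	set_xen_guest_handle(reservation.extent_start, frames);
	reservation.nr_extents = nr_pages;

	rc = HYPERVISOR_memory_op(XENMEM_decrease_reservation, &reservation);
	return rc == nr_pages ? 0 : -EFAULT;
}

[A matching *_increase_reservation helper would issue XENMEM_populate_physmap
on the same frames when the buffer is freed.]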
So the question is --- would it make sense to do all of this separately from the balloon driver?
This can be done, but which driver would host this code then? If we move it
out of the balloon driver, it could go to either gntdev or grant-table.
What's your preference?
A separate module?

Is there any use for this feature outside of your zero-copy DRM driver?
Intel's hyper dma-buf (Dongwon/Matt CC'ed), V4L/GPU at least.

At the time I tried to upstream the zcopy driver [1], it was discussed and decided that it would be better if I removed all DRM-specific code and moved it into the Xen drivers.
Hence this RFC.

But it can also be implemented as a dedicated Xen dma-buf driver which would have all the
code from this RFC plus a bit more (char/misc device handling at least).
This would also require a dedicated user-space library, just like libxengnttab.so
for gntdev (at the moment all the new IOCTLs are covered there).

If the idea of a dedicated Xen dma-buf driver seems more attractive, we can
work toward that solution. BTW, I do support this idea, but was not sure
whether the Xen community would accept yet another driver that duplicates
quite a lot of code from the existing gntdev/balloon/grant-table. Now, after
this RFC, I hope the pros and cons of both a dedicated driver and a
gntdev/balloon/grant-table extension are clearly visible and we can make a
decision.
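
[Purely to illustrate the dedicated-driver option, a hypothetical UAPI
fragment for such a char/misc device is sketched below; the structure name,
layout, ioctl magic and number are all invented for this example and would be
defined during review, mirroring how gntdev exposes its ioctls to
libxengnttab:]

/* Hypothetical uapi header for a dedicated Xen dma-buf device. */
#include <linux/types.h>
#include <linux/ioctl.h>

struct xen_dmabuf_exp_from_refs {
	__u32 flags;		/* IN: e.g. request DMA-able backing pages */
	__u32 count;		/* IN: number of grant references */
	__u32 domid;		/* IN: domain that granted the pages */
	__s32 fd;		/* OUT: exported dma-buf file descriptor */
	__u32 refs[];		/* IN: 'count' grant references */
};

#define XEN_DMABUF_IOCTL_EXP_FROM_REFS \
	_IOWR('X', 0, struct xen_dmabuf_exp_from_refs)

[A user-space wrapper in a library analogous to libxengnttab would then just
open the device, fill in the structure and issue the ioctl.]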


-boris
Thank you,
Oleksandr
[1] https://lists.freedesktop.org/archives/dri-devel/2018-April/173163.html


 

