Re: [Xen-devel] [PATCH v13] This is the ABI for the two halves of a para-virtualized sound driver to communicate with each other.



Hi Oleksandr,

On 28/11/16 14:56, Oleksandr Andrushchenko wrote:
On 11/28/2016 04:24 PM, Julien Grall wrote:
Hi Oleksandr,

On 28/11/16 14:12, Oleksandr Andrushchenko wrote:

On 11/28/2016 03:27 PM, Jan Beulich wrote:
+ *
+ * gref_dir_next_page - grant_ref_t, reference to the next page describing
+ *   page directory. Must be 0 if no more pages in the list.

If I am not mistaken, 0 is a valid grant reference.

Then I will remove this sentence; in any case, the backend knows how many
grefs there are for the given buffer size.
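For illustration only, a single page of such a directory could be laid
out as below (a sketch, not the final ABI; grant_ref_t as in Xen's
public/grant_table.h):

    #include <stdint.h>

    typedef uint32_t grant_ref_t; /* matches xen/include/public/grant_table.h */

    struct xensnd_page_directory {
        grant_ref_t gref_dir_next_page; /* next page of the directory, or
                                         * an end-of-list marker */
        grant_ref_t gref[1];            /* gref[i]: grants of the shared
                                         * buffer, filling the rest of
                                         * this page */
    };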
+ * gref[i] - grant_ref_t, reference to a shared page of the buffer
+ *   allocated at XENSND_OP_OPEN
+ *
+ * Number of grant_ref_t entries in the whole page directory is not
+ * passed, but instead can be calculated as:
+ *   num_grefs_total = DIV_ROUND_UP(XENSND_OP_OPEN.buffer_sz, PAGE_SIZE);
The header should be self contained, and there's no DIV_ROUND_UP()
anywhere under public/io/ for a reader to refer to. Please express this
using mathematical terms plus, if needed, standard C library ones.
Done, I will put:
  num_grefs_total = (XENSND_OP_OPEN.buffer_sz + PAGE_SIZE - 1) / PAGE_SIZE
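To make the open-coded round-up concrete, here is a minimal sketch
(assuming both values are plain unsigned integers):

    #include <stdint.h>

    /* Number of grant references needed to cover buffer_sz bytes,
     * rounded up to whole pages. */
    static uint32_t num_grefs_total(uint64_t buffer_sz, uint64_t page_sz)
    {
        return (uint32_t)((buffer_sz + page_sz - 1) / page_sz);
    }

    /* e.g. num_grefs_total(65540, 4096) == 17: 16 full pages plus one
     * more for the trailing 4 bytes */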

Can we avoid using PAGE_SIZE in the header? Xen, the front-end, and the
back-end may have different page sizes.

Then, I believe, the protocol should implement something like blkif does:
a multi-page buffer whose layout depends on the front-end and back-end
page sizes (blkif: max-ring-page-order/ring-ref%u/ring-page-order).
Is this what you mean?
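For reference, blkif negotiates its multi-page ring via XenStore roughly
as follows (key names are blkif's documented ones, values illustrative):

    backend:  max-ring-page-order = "2"  # supports up to 1 << 2 = 4 pages
    frontend: ring-page-order = "1"      # chooses 1 << 1 = 2 pages
              ring-ref0 = "8"            # one grant reference per
              ring-ref1 = "9"            # ring page (ring-ref%u)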

It is not what I meant. I asked you to define PAGE_SIZE. Is it the PAGE_SIZE of Xen? The PAGE_SIZE of the backend? The PAGE_SIZE of the frontend?

Currently, a PV driver's PAGE_SIZE is based on the size of a grant page.
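To illustrate the distinction (a sketch; XEN_PAGE_SHIFT below is the
value Linux uses in include/xen/page.h, and grants operate on Xen-sized
pages regardless of the guest's own page size):

    #define XEN_PAGE_SHIFT 12
    #define XEN_PAGE_SIZE  (1UL << XEN_PAGE_SHIFT) /* granularity of one grant */

    /* A guest running with 64 KiB pages therefore needs
     * 65536 / XEN_PAGE_SIZE == 16 grant references per guest page. */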

Regards,

--
Julien Grall
