Re: [Xen-devel] [PATCH 2/2] sndif: add explicit back and front synchronization
> * +----------------+----------------+----------------+----------------+
> * |                           gref_directory                          | 24
> * +----------------+----------------+----------------+----------------+
> - * |                             reserved                             | 28
> - * +----------------+----------------+----------------+----------------+
> - * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/|
> + * |                            period_sz                             | 28
> * +----------------+----------------+----------------+----------------+
> * |                             reserved                              | 32
> * +----------------+----------------+----------------+----------------+
> @@ -578,6 +616,14 @@
> * pcm_channels - uint8_t, number of channels of this stream,
> *   [channels-min; channels-max]
> * buffer_sz - uint32_t, buffer size to be allocated, octets
> + * period_sz - uint32_t, recommended event period size, octets
> + *   This is the recommended (hint) value of the period at which frontend would
> + *   like to receive XENSND_EVT_CUR_POS notifications from the backend when
> + *   stream position advances during playback/capture.
> + *   It shows how many octets are expected to be played/captured before
> + *   sending such an event.
> + *   If set to 0 no XENSND_EVT_CUR_POS events are sent by the backend.
> + *

I would gate this based on the version. That is, if the version is 0,
then this field does not exist.

> * gref_directory - grant_ref_t, a reference to the first shared page
> *   describing shared buffer references. At least one page exists. If shared
> *   buffer size (buffer_sz) exceeds what can be addressed by this single page,
> @@ -592,6 +638,7 @@ struct xensnd_open_req {
>      uint16_t reserved;
>      uint32_t buffer_sz;
>      grant_ref_t gref_directory;
> +     uint32_t period_sz;

The same applies here. Just put a comment mentioning the version part.
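
To make the suggested version gating concrete, below is a minimal
frontend-side sketch in C. It is only an illustration: the helper name,
the grant_ref_t stand-in typedef, and the omitted leading fields are
assumptions for the sake of a self-contained example, not part of the
published sndif header; it also assumes the protocol version negotiated
with the backend is available as a plain integer and that period_sz is
only defined for versions >= 1, as the review suggests.

#include <stdint.h>
#include <string.h>

typedef uint32_t grant_ref_t;   /* stand-in for the real Xen type */

struct xensnd_open_req {
    /* Preceding fields (pcm_rate, pcm_format, pcm_channels, ...)
     * omitted; only the tail quoted in the diff above is shown. */
    uint16_t reserved;
    uint32_t buffer_sz;
    grant_ref_t gref_directory;
    uint32_t period_sz;         /* only defined when version >= 1 */
};

/* Hypothetical helper showing the suggested version gating. */
static void xensnd_fill_open_req(struct xensnd_open_req *req,
                                 uint32_t version, uint32_t period_sz)
{
    memset(req, 0, sizeof(*req));
    /* ... fill buffer_sz, gref_directory, etc. as before ... */

    if (version >= 1)
        req->period_sz = period_sz; /* 0: no XENSND_EVT_CUR_POS events */
    /*
     * For version 0 the field does not exist on the wire: the octets
     * at that offset stay reserved and are left zero, so an old
     * backend sees exactly the layout it expects.
     */
}

A frontend that negotiated version 0 never writes the field, which
matches the point above that for version 0 the field does not exist;
the same request structure then serves both protocol versions.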