Re: [MirageOS-devel] towards a common Mirage 'FLOW' signature
Hi,
On Fri, Jun 20, 2014 at 11:53 PM, Anil Madhavapeddy <anil@xxxxxxxxxx> wrote:
I think I'm confused about 2 separate questions :-) I wonder whether we should extend FLOW to have all 4 of {read,write}{,_ack} so that we can control when data is acked/consumed. The TCP implementation would then only call the _ack function when it had received confirmation that the data had been processed. If the TCP implementation needed to resend (possibly after a crash) it could call 'read' again and get back the same data it had before. So the result of 'read' would be an int64 stream offset * Cstruct.t, and 'read_ack' would mark an int64 offset as being consumed. This is what I'm doing in xenstore and shared-memory-ring: I don't know if anyone else wants this kind of behaviour. In the case where a VM sends a block write, which is then sent over NFS/TCP, it would allow us to call write_ack on the flow to the guest when the TCP acks are received.
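To make that concrete, something like the following is roughly the shape I have in mind -- names are made up and the abstract buffer/io types just stand in for Cstruct.t and Lwt.t, so treat it as a sketch rather than a concrete proposal for FLOW itself:

```ocaml
(* Sketch only: an extended flow signature with explicit acks.
   [buffer] and ['a io] stand in for Cstruct.t and Lwt.t. *)
module type FLOW_WITH_ACK = sig
  type flow
  type buffer          (* in practice Cstruct.t *)
  type +'a io          (* in practice Lwt.t *)

  (** Return the next unread data together with its int64 stream offset.
      The data is not consumed: calling [read] again before [read_ack]
      returns the same data, so the transport can replay it after a
      crash or a retransmission. *)
  val read : flow -> (int64 * buffer) io

  (** Mark everything up to [offset] as consumed; only now may the
      implementation discard the underlying buffers. *)
  val read_ack : flow -> int64 -> unit io

  (** Queue data for transmission and return the stream offset at which
      it was written. *)
  val write : flow -> buffer -> int64 io

  (** Signal that data written up to [offset] has been fully processed
      downstream, e.g. only once the TCP acks have arrived. *)
  val write_ack : flow -> int64 -> unit io
end
```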
Separately, in the case of vchan the buffer size is set at ring setup time. If you connected a vchan ring to a TCP transmitter then the TCP transmitter, presumably with its higher-latency link, would try to keep its link full by buffering more. If the vchan ring size is smaller than the TCP window size (likely), TCP would have to copy into temporary buffers. If we knew we were going to need more buffered data then we could make the vchan ring larger and avoid the copying? Perhaps that wouldn't work due to alignment requirements. Anyway, this is more of a 'flow setup' issue than a during-the-flow issue. Perhaps a CHANNEL would be where we could close and re-open flows in order to adjust their parameters.
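As a straw man for the setup side (again with invented names, and the same abstract io type), the caller could hint at how much in-flight buffering it expects when the flow is opened:

```ocaml
(* Sketch only: a 'flow setup' interface where buffering parameters are
   negotiated at connect time rather than during the flow. *)
module type FLOW_SETUP = sig
  type t
  type flow
  type error
  type +'a io          (* in practice Lwt.t *)

  (** Open a flow to [endpoint], asking for at least [buffer_size] bytes
      of in-flight buffering.  A vchan implementation could size its
      shared rings accordingly at setup time, so a TCP transmitter on
      the other side wouldn't need to copy into temporary buffers to
      keep its window full. *)
  val connect : t -> ?buffer_size:int -> string -> (flow, error) result io
end
```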
Cheers,
Dave

_______________________________________________
MirageOS-devel mailing list
MirageOS-devel@xxxxxxxxxxxxxxxxxxxx
http://lists.xenproject.org/cgi-bin/mailman/listinfo/mirageos-devel