
Re: TCP wait_for transmit question



On Mon, Jul 16, 2012 at 10:35:38AM +0100, Richard Mortier wrote:
> 
> On 14 Jul 2012, at 17:10, Anil Madhavapeddy wrote:
> 
> > Interesting; so this check is also clamped to the TX MSS
> > (Tcp.Pcb.write_available) and not to the max_size of the application
> > buffer.
> > 
> > This is probably a good time to nail down the semantics of all these
> > different modules, particularly as vchan/shmem will be coming along
> > shortly.
> > 
> > Channel: buffered I/O, manual flush required
> > Flow: unbuffered I/O, will be triggered immediately
> > Tcp.Pcb: buffered if delay writes are used, unbuffered with nodelay
> > 
> > The TCP Nagle buffer is necessary since only the TCP layer knows
> > whether there are TX packets in flight, whereas the Channel module
> > doesn't...
> 
> Naive question: what's the relationship between Tcp.Pcb and Flow then?
> I.e., if I have a TCP connection underneath with Nagling turned on, is
> my flow genuinely unbuffered, or just mostly so?

Depends on whether your question is about the current relationship or the
future one.  'Flow' is where non-TCP transports, such as vchan, also sit,
with the same API.  'Channel' is application-level buffering (manually
flushed), whereas 'Tcp.Pcb' with Nagling on would be stack-level buffering
(since it has to detect TX-in-flight, which the application can't see).
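
To pin the contracts down, here's a minimal sketch of the two write
disciplines as OCaml module types.  These signatures are illustrative
only (and assume Lwt for the I/O types); they're not the real Mirage
interfaces, since those are exactly what's still up for discussion:

  (* Illustrative signatures, not the actual Mirage API; the point is
     the contract, not the names. *)
  module type FLOW = sig
    type t

    (* Unbuffered: each write is handed straight to the transport,
       though a Nagling Tcp.Pcb underneath may still delay it. *)
    val write : t -> string -> unit Lwt.t
  end

  module type CHANNEL = sig
    type t

    (* Buffered: writes accumulate until the application flushes. *)
    val write : t -> string -> unit Lwt.t
    val flush : t -> unit Lwt.t
  end

So to answer the question directly: a Flow over a Nagling Tcp.Pcb is
unbuffered at the Flow layer, but only 'mostly' unbuffered end-to-end;
genuinely unbuffered needs nodelay underneath.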

It's an open question whether these buffers should be collapsed
(particularly the TCP Nagle buffer, which could be exposed more
generically through Flow).  We also need to look closely at the impact of
the various multipath transports before locking the Flow interface down,
as it doesn't make much sense to design a single-stream application API
for use on a fault-ridden cloud infrastructure.  I'm thinking of systems
like SST [1] and the SSH channel layer [2] as the directions our
interfaces should head in.
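
As a very rough illustration of what an SST/SSH-channel-style interface
might look like, here's a hypothetical sketch (all names made up): one
connection carrying many independent streams, so the stack is free to
restripe streams across paths without the application noticing:

  module type MULTI_FLOW = sig
    type conn     (* the underlying (possibly multipath) connection *)
    type stream   (* one logical channel within the connection *)

    val open_stream  : conn -> stream Lwt.t
    val read         : stream -> string Lwt.t
    val write        : stream -> string -> unit Lwt.t
    val close_stream : stream -> unit Lwt.t
  end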

For example, Dave has in the past suggested a multicasting block proxy VM
for booting up thousands of Windows VMs as a good evaluation of the
Mirage stack.  This VM would act as a blkfront for a real block device,
cache its blocks in RAM, and serve them up as copy-on-write RAMdisks to
the booting VMs.  It's a very simple application from a Mirage
perspective, but a complete host-killer from a Xen point of view.
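
The data structure at the heart of that proxy is tiny.  A hypothetical
sketch of the copy-on-write core (plain OCaml, sector-indexed, ignoring
Lwt and the blkfront plumbing):

  (* A shared RAM cache of blocks fetched once from the real device,
     plus a per-VM overlay capturing writes, so each booting VM sees a
     private disk. *)
  type sector = int64

  type cow_disk = {
    shared  : (sector, string) Hashtbl.t;  (* blocks from the real device *)
    overlay : (sector, string) Hashtbl.t;  (* this VM's private writes *)
  }

  let read d s =
    match Hashtbl.find_opt d.overlay s with
    | Some blk -> Some blk
    | None     -> Hashtbl.find_opt d.shared s
                  (* None here would mean: fetch via blkfront and cache *)

  let write d s blk = Hashtbl.replace d.overlay s blk

The shared table is what makes this a multicast win: the first boot warms
the cache, and every subsequent VM hits RAM instead of the real disk.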

[1] http://pdos.csail.mit.edu/uia/sst/
[2] http://tools.ietf.org/html/rfc4254

-anil
