
Re: [Xen-devel] RFC: XenSock brainstorming



On Mon, 6 Jun 2016, Andrew Cooper wrote:
> On 06/06/16 10:33, Stefano Stabellini wrote:
> > Hi all,
> >
> > a couple of months ago I started working on a new PV protocol for
> > virtualizing syscalls. I named it XenSock, as its main purpose is to
> > allow the implementation of the POSIX socket API in a domain other than
> > the one of the caller. It allows connect, accept, recvmsg, sendmsg, etc
> > to be implemented directly in Dom0. In a way this is conceptually
> > similar to virtio-9pfs, but for sockets rather than filesystem APIs.
> > See this diagram as reference:
> >
> > https://docs.google.com/presentation/d/1z4AICTY2ejAjZ-Ul15GTL3i_wcmhKQJA7tcXwhI3dys/edit?usp=sharing
> >
> > The frontends and backends could live either in userspace or kernel
> > space, with different trade-offs. My current prototype is based on Linux
> > kernel drivers but it would be nice to have userspace drivers too.
> > Discussing where the drivers could be implemented is beyond the scope
> > of this email.
> 
> Just to confirm, you are intending to create a cross-domain transport
> for all AF_ socket types, or just some?

My use case is AF_INET, so that's what I intend to implement. If
somebody wanted to come along and implement AF_IPX, for example, I would
be fine with that and would welcome the effort.


> > # Goals
> >
> > The goal of the protocol is to provide networking capabilities to any
> > guests, with the following added benefits:
> 
> Throughout, s/Dom0/the backend/
> 
> I expect running the backend in dom0 will be the overwhelmingly common
> configuration, but you should avoid designing the protocol for just this
> usecase.

As always, I am happy to make this as generic and reusable as possible.
The goals stated here are my goals for this protocol, and I hope many
readers will share some of them. Although I have no interest in running
the backend in a domain other than Dom0, there is nothing in the current
design (or even in my early code) that would prevent driver domains from
working.



> > * guest networking should work out of the box with VPNs, wireless
> >   networks and any other complex network configurations in Dom0
> >
> > * guest services should listen on ports bound directly to Dom0 IP
> >   addresses, fitting naturally in a Docker based workflow, where guests
> >   are Docker containers
> >
> > * Dom0 should have full visibility on the guest behavior and should be
> >   able to perform inexpensive filtering and manipulation of guest calls
> >
> > * XenSock should provide excellent performance. Unoptimized early code
> >   reaches 22 Gbit/sec TCP single stream and scales to 60 Gbit/sec with 3
> >   streams.
> 
> What happens if domU tries to open an AF_INET socket, and the domain has
> both sockfront and netfront ?

I wouldn't encourage this configuration. However, it works more
naturally than one would expect: depending on how DomU is configured, if
the AF_INET socket calls are routed to the XenSock frontend, they will
appear to come from Dom0; otherwise they are routed as usual. For
example, if the frontend is implemented in userspace, say in a modified
libc library, then applications in the guest that use the library send
their data through XenSock, while everything else goes through netfront.


>  What happens if a domain has multiple sockfronts?

I don't think it should be a valid configuration; I cannot think of a
case where one would want something like that. But if somebody comes up
with a valid scenario for why and how this should work, I would be happy
to work with her to make it happen.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
