
Re: [Xen-devel] Inter-domain Communication using Virtual Sockets (high-level design)

At 17:19 +0100 on 17 Jun (1371489597), David Vrabel wrote:
> The connection manager is a user space process that opens an AF_VSOCK
> listening socket on port 1.

OK; and the kernel transport in the backend plumbs that over a
pre-arranged shared ring to the kernel transport in the frontend, which
terminates it there (i.e. all traffic on that link is connection-setup
chatter, and frontend userspace can't actually talk to the manager)?

In that case I think that giving the manager a socket-level name
(i.e. '0x7ff1:1') is just confusing (at least it confused me!), since
it's not really a socket connection, at least at that end.

> The vsock transport of the frontend
> effectively connects to this port (but since its in kernel code it
> doesn't use the socket API).


> > What does that look like at the socket interface?  Would an AF_VSOCK
> > socket transparently stay open across migrate but connect to a different
> > backend?  Or would it be torn down and the application need to DTRT
> > about re-connecting?
> All connections are disconnected on migration.  The applications will
> need to be able to handle this.
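Since every vsock connection is torn down on migration, each application
needs a reconnect path.  A minimal sketch of such retry logic (the
`connect_fn` callable and the retry parameters are illustrative
placeholders, not part of the design):

```python
import time

def connect_with_retry(connect_fn, attempts=5, delay=0.1):
    """Call connect_fn() until it succeeds, retrying after a
    disconnect such as the one caused by migration."""
    last_error = None
    for _ in range(attempts):
        try:
            return connect_fn()
        except ConnectionError as e:
            last_error = e
            time.sleep(delay)
    raise last_error
```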


> >> The toolstack sets the policy in the connection manager
> >> to allow connection requests.  The default policy is to deny
> >> connection requests.
> > 
> > Hmmm.  Since the underlying transports use their own ACLs (e.g. grant
> > tables), the connection manager can't actually stop two domains from
> > communicating.  You'd need to use XSM for that.
> I think there are two security concerns here.
> 1. Preventing two co-operating domains from setting up a communication
> channel.
> And,
> 2. Preventing a domain from connecting to vsock services listening in
> another domain.
> As you say, the connection manager does not address the first and XSM
> would be needed.  This isn't something introduced by this design though.


> For the second, I think the connection manager does work here and I
> think it is useful to have this level of security without having a
> requirement to use XSM.

Fair enough.  Maybe it just needs a big warning in the docs saying
"don't think you can use this to isolate VMs; there are other channels
besides VSOCK".
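The default-deny policy described above (the toolstack adds explicit
allow rules to the connection manager) could be sketched roughly like
this — the rule shape, keyed on (listener domid, port, client domid), is
an assumption for illustration:

```python
class ConnectionPolicy:
    """Default-deny ACL for vsock connection requests.

    A request is only permitted if the toolstack has added a
    matching allow rule; everything else is refused."""

    def __init__(self):
        self._allowed = set()  # (listener_domid, port, client_domid)

    def allow(self, listener_domid, port, client_domid):
        self._allowed.add((listener_domid, port, client_domid))

    def is_permitted(self, listener_domid, port, client_domid):
        return (listener_domid, port, client_domid) in self._allowed
```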

> >> 2. Providing a mechanism for two domains to exchange the grant
> >>    references and event channels needed for them to setup a shared
> >>    ring transport.
> > 
> > If they already want to talk to each other, they can communicate all
> > that in a single grant ref (which is the same size as an AF_VSOCK port).
> The shared rings are per-peer not per-listener.  If a peer becomes
> compromised and starts trying a DoS attack (for example), the ring can
> be shutdown without impacting other guests.

What I meant to say was: if the frontend has a 64-bit address, and the
backend is expecting the connection, you could just make the address be
domid::grantid and stuff the event-channel info into the shared page. 
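The domid::grantid addressing suggested here fits in 64 bits: a 16-bit
domid plus a grant reference the size of an AF_VSOCK port (32 bits).  A
sketch of the packing — the exact split (domid in the high word, grant
ref in the low 32 bits) is an assumption:

```python
DOMID_SHIFT = 32  # assumed layout: domid high, grant ref in low 32 bits

def pack_addr(domid, grant_ref):
    """Combine a 16-bit domid and a 32-bit grant reference into
    a single 64-bit vsock-style address."""
    assert 0 <= domid < (1 << 16)
    assert 0 <= grant_ref < (1 << 32)
    return (domid << DOMID_SHIFT) | grant_ref

def unpack_addr(addr):
    """Split a packed address back into (domid, grant_ref)."""
    return addr >> DOMID_SHIFT, addr & ((1 << 32) - 1)
```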

But I see now that the actual interesting part is in brokering
connection requests from as-yet-unknown peers.  That leads to the next
point:
> > So I guess the purpose is multiplexing connection requests: some sort of
> > listener in the 'backend' must already be talking to the manager (and
> > because you need the manager to broker new connections, so must the
> > frontend).
> > 
> > Wait, is this connection manager just xenstore in a funny hat?  Or could
> > it be implemented by adding a few new node/permission types to xenstore?
> Er yes, I think this is just xenstore in a funny hat.  Reusing xenstore
> would seem preferable to implementing a new daemon.

That sounds good to me.  I think that some equivalent of the unix sticky
bit could make this brokering fit into the xenstore model.  Maybe we
could have a type of node where other VMs could make subnodes as long as
those subnodes were named with the creator's domid/uuid.  Or something
along those lines.
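The sticky-bit-like rule sketched above — any VM may create a subnode,
but only one named after its own domid — might be checked along these
lines (the node-naming convention is hypothetical):

```python
def may_create_subnode(parent_is_sticky, subnode_name, creator_domid):
    """Permit subnode creation under a 'sticky' xenstore node only
    when the subnode is named after the creating domain's domid."""
    if not parent_is_sticky:
        return False
    return subnode_name == str(creator_domid)
```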



Xen-devel mailing list


