Re: [Xen-devel] Re: Interdomain comms
On 5/7/05, Harry Butterworth <harry@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx> wrote:
>
> I'd need help from security experts but, as an initial stab, if the
> idc_address and remote_buffer_references are capabilities then I think
> the security falls out in the wash since it's impossible to access
> something unless you have been granted permission.
>

That seems straightforward and clear to me on the local host, but it seems
like there might be additional concerns when bridging to the cluster. Of
course, those security issues may be embedded in the underlying network
transport layer, so maybe it's not as much of a concern.

> OK, here goes:
>
> A significant difference between Plan 9 and Xen which is relevant to
> this discussion is that Plan 9 is designed to construct a single shared
> environment from multiple physical machines whereas Xen is designed to
> partition a single physical machine into multiple isolated environments.
> Arguably, Xen clusters might also partition multiple physical machines
> into multiple isolated environments with some weird and wonderful
> cross-machine sharing and replication going on.
>

Yes and no. Plan 9 does provide a coherent mechanism to unify access to the
resources of an entire cluster of physical machines, but it also provides a
lot of facilities for partitioning and organizing those resources into
private name spaces. It's both of these aspects that Ron and I would like to
see leveraged in any sort of future Xen I/O architecture. But this gets more
into organizational features, which may be a separate topic.

> The significance of this difference is that in the Xen environment,
> there are many interesting opportunities for optimisations across the
> virtual machines running on the same physical machine. These
> optimisations are not relevant to a native Plan 9 system and so (AFAICT
> with 20 mins experience :-) ) there is no provision for them in 9P.
>

This is true. The example you step through sounds like an implementation of
DSM targeted at a buffer cache. In the past, 9P has not been used to provide
such a level of transparent sharing of underlying memory. However, an area
that Orran and I have been talking about is exploring the addition of
scatter/gather semantics to the 9P protocol implementations (there's really
not that much that has to change in the specification, just some differences
in the way the protocol looks on the underlying transport). In other words,
there is nothing to prevent read/write calls from carrying pointers to the
page containing the data rather than a copy of the data. That page could
itself be a copy, or it could be shared as in your example. The cool thing
is that, since 9P is already set up to be a network protocol, if the traffic
did end up having to leave shared memory and go over an external wire,
there's already a fairly rich organizational infrastructure in place (with
built-in support for security/encryption/digesting).

> Had the virtual machines been booting on different physical machines
> then the path through the FE and BE client code would have been
> identical (so we have met the network transparency goal) but the IDC
> implementation would have taken an alternative path upon discovering
> that the remote_buffer_reference was genuinely remote.
>

The scenario you walk through sounds really great, and providing mechanisms
to manage and recognize shared buffers on the same machine sounds like
absolutely the right thing to do.
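To make that concrete, here is a minimal C sketch of the idea, under stated
assumptions: the type and function names (idc_buffer_ref_t, local_machine_id,
idc_map_shared_page, idc_fetch_remote) are hypothetical and are not taken
from Harry's proposal or from the 9P specification. The point is only that
the client code is identical whether the reference resolves to a shared page
on the same physical machine or to a genuinely remote buffer reached over
the cluster transport.

/*
 * Sketch only: a buffer reference that the IDC layer resolves either by
 * sharing the page (same physical machine) or by copying the data over
 * the cluster transport (genuinely remote).  All names are illustrative.
 */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef struct idc_buffer_ref {
    uint64_t capability;   /* unforgeable token granting access to the buffer */
    uint64_t machine_id;   /* physical machine that owns the buffer           */
    uint64_t page_frame;   /* page frame number on the owning machine         */
    uint32_t offset;       /* offset of the data within that page             */
    uint32_t length;       /* length of the data                              */
} idc_buffer_ref_t;

/* Identifies the physical machine this code runs on (set up elsewhere). */
extern uint64_t local_machine_id;

/* Map a page granted by another domain on the same machine (assumed helper). */
extern void *idc_map_shared_page(const idc_buffer_ref_t *ref);

/* Pull the data across the cluster transport into dst (assumed helper). */
extern int idc_fetch_remote(const idc_buffer_ref_t *ref, void *dst);

/*
 * The FE/BE client calls this without knowing where the buffer lives; only
 * the IDC implementation takes a different path in the remote case, which
 * is the network-transparency property discussed above.
 */
int idc_read_buffer(const idc_buffer_ref_t *ref, void *dst, size_t len)
{
    if (len < ref->length)
        return -1;                        /* caller's buffer is too small */

    if (ref->machine_id == local_machine_id) {
        /* Same physical machine: share the page, nothing crosses a wire. */
        uint8_t *page = idc_map_shared_page(ref);
        if (page == NULL)
            return -1;
        memcpy(dst, page + ref->offset, ref->length);
        return 0;
    }

    /* Genuinely remote: fall back to copying over the cluster transport. */
    return idc_fetch_remote(ref, dst);
}

Whether the local case hands back the mapping directly or copies into the
caller's buffer is a policy choice; the point is that nothing in the
client-visible interface changes when the buffer turns out to be remote.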
I'm just arguing for a simpler client interface (and I think something akin
to the 9P API together with a nice Channel abstraction to manage endpoints
would be the right way to go about it). Okay Ron, you got me into this, what
are your thoughts?

> Hopefully this explains better where I'm coming from.

This paints a clearer picture of what you were going after, and I think we
are on similar tracks. I need to get more engaged in looking at the Xen
stuff so I have a better context for some of the problems particular to its
environment.

        -eric

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel