[Xen-devel] [RFC] Making Xen cluster aware - locking domains on shared storage
Hi!

These days I had the idea that it should not be that hard to extend 'xm' to check whether a domain is already running somewhere in the cluster, provided shared storage is available.

The basic idea is the following: consider a cluster of n (>= 2) Xen nodes which share some storage (e.g. a SAN or NAS or whatever) where the storage for the DomUs is located, and where some piece of shared (cluster) filesystem is located, too. Let's assume further that the shared filesystem is mounted at /xen_cluster on all nodes.

Now it would come in handy if xm (or maybe some other part of the Xen universe) could create some kind of lockfile on this FS to indicate that a given domain is already running somewhere in this cluster (maybe the Dom0 hostname could be written into this file; see the rough sketch at the end of this mail). This way it would be quite easy to prevent trouble for people running Xen in this kind of environment, as a domain could only be started once in the cluster.

So I've made up three patches:

 * [0] Example options for /etc/xen/xend-config.sxp
 * [1] Option parsing for /etc/xen/xend-config.sxp
 * [2] Basic implementation of lockfile checking/creation during the 'xm create' phase. The interesting part (shutdown, destroy, reboot(?), migrate, ...) is still missing.

So what do you think about all this? I guess there is no big resistance against the idea as such. What about the implementation approach? Is this something you could imagine as a direction to go in? Or should all of this maybe be handled internally, to drop the dependency on the shared filesystem?

[0] http://files.rfc2324.org/patches/xen/xend-config.sxp.diff
[1] http://files.rfc2324.org/patches/xen/XendOptions.py.diff
[2] http://files.rfc2324.org/patches/xen/create.py.diff

Ciao
Max

--
Follow the white penguin.
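P.S.: To make the idea a bit more concrete, here is a minimal sketch of what the lock handling could look like, in the spirit of patch [2]. The lock directory /xen_cluster/locks and the function names are made up for illustration only; this is not the code from the patches.

  import os
  import errno
  import socket

  # Hypothetical lock directory on the shared cluster FS; in the real
  # patches this would come from an option in /etc/xen/xend-config.sxp.
  XEND_DOMAIN_LOCK_PATH = "/xen_cluster/locks"

  def acquire_domain_lock(domname):
      """Atomically create <lockdir>/<domname>.lock before 'xm create'.

      Fails if the lockfile already exists, i.e. the domain is (or at
      least claims to be) running on some other node in the cluster."""
      lockfile = os.path.join(XEND_DOMAIN_LOCK_PATH, domname + ".lock")
      try:
          # O_EXCL makes creation fail if the file is already there
          fd = os.open(lockfile, os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o644)
      except OSError, e:
          if e.errno == errno.EEXIST:
              holder = open(lockfile).read().strip()
              raise RuntimeError("domain %s already running on %s"
                                 % (domname, holder))
          raise
      # Record which Dom0 holds the lock, as suggested above
      os.write(fd, socket.gethostname())
      os.close(fd)

  def release_domain_lock(domname):
      """Remove the lockfile on shutdown/destroy/migrate-out."""
      lockfile = os.path.join(XEND_DOMAIN_LOCK_PATH, domname + ".lock")
      try:
          os.unlink(lockfile)
      except OSError:
          pass

Of course the O_EXCL trick only gives you a real guarantee if the shared filesystem provides atomic exclusive creation across all nodes; a proper cluster FS should do that, but it is something to keep in mind.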