Re: [Xen-users] vbd Sharing
On Tue, 2007-02-06 at 02:53 +0100, Florian Heigl wrote:
> Hi,
>
> I'm currently working on some clustering howto and trying to do my
> work with an increasing number of Xen domUs. They run Fedora Core 6
> and are intended to share a number of OCFS2 filesystems, which all
> reside on an EVMS volume.
>
> I'm now looking for a _clean_ way of enabling shared rw access.

The only time this is an issue is when more than one guest using the
cluster FS is on the same physical dom0 node. The "w!" flag when
specifying the VBD works very well.
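For example, a guest disk line using that flag might look like the
following (the volume path and device names are made up for
illustration):

    # /etc/xen/node1.cfg -- hypothetical domU config
    # the '!' suffix on the mode tells /etc/xen/scripts/block to allow
    # shared writable access instead of refusing the second attach
    disk = [ 'phy:/dev/evms/ocfs2vol,xvdb,w!' ]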
If your domU's root file system is OCFS2, be sure to specify an
appropriate initrd that does the following (a rough sketch of such an
initrd follows the list):

 - bring up eth(x) and, if using iSCSI, configure it
 - obtain a centralized cluster.conf by some means, if so desired
 - modprobe ocfs2
 - ..
 - pivot_root

Then o2cb will take over the rest. I really recommend booting to a
small local VBD, then arranging fstab in such a way that it
facilitates your single system image, if so desired.
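As a very rough illustration of those steps, the script inside such an
initrd could look something like this (the address, device names, and
cluster.conf location are all invented, and the iSCSI login and o2cb
bring-up are left as comments since the details vary by distribution):

    #!/bin/sh
    # hypothetical initrd /linuxrc sketch
    mount -t proc proc /proc
    mount -t sysfs sysfs /sys

    # bring up eth0 (static address here; substitute DHCP if you like)
    ifconfig eth0 192.168.0.11 netmask 255.255.255.0 up

    # (if iSCSI) log in to the target here

    # fetch a centralized cluster.conf, if so desired
    # wget -O /etc/ocfs2/cluster.conf http://.../cluster.conf

    # load OCFS2; the o2cb cluster stack must be online before the
    # mount below will succeed
    modprobe ocfs2

    # mount the real root and pivot into it
    mount -t ocfs2 /dev/xvdb /newroot
    cd /newroot
    pivot_root . initrd
    exec chroot . /sbin/init <dev/console >dev/console 2>&1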
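And for the small-local-root arrangement, the guest's fstab might be
as simple as this (device names and mount point made up; _netdev keeps
the OCFS2 mount from being attempted before networking is up):

    /dev/xvda1   /      ext3   defaults   1 1
    /dev/xvdb    /data  ocfs2  _netdev    0 0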
> I know people have already done this; the only documented way I found
> so far included hacking /etc/xen/scripts/block to the grade of
> disabling the whole check it's intended to do, which isn't
> 'production grade' :)

You just ran a difficult search: quite a bit of what turns up in the
top ten for any reasonable keyword phrase is outdated.

> I agree blocking shared rw accesses in general is a good thing [tm],
> but I wonder what to do about the cases where it's not.

That check was "polished" quite a bit.

> I could of course map the volumes to my fileserver and generate an
> iSCSI target there, but I think I have other ways of maximizing
> overhead :)

AoE is *very* nice for this. It has a very small overhead cost and
needs no TCP offload cards, since it's a non-routable, Ethernet-level
protocol. I recommend looking into it; migration becomes very easy
once you do.

> Any takers?
> If not, to whom should I submit a patch for 'block'?
>
> Regards,
> Florian

Hope this helps.

Best,
--Tim
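P.S. In case it saves you some digging, a minimal AoE setup looks
roughly like this (shelf/slot numbers, interface, and volume path are
invented; vblade and aoetools are the usual userland pieces):

    # on the storage box: export the EVMS volume as AoE shelf 0, slot 1
    vbladed 0 1 eth0 /dev/evms/ocfs2vol

    # on the Xen host: load the initiator and find the device
    modprobe aoe
    aoe-discover
    aoe-stat    # the export appears as /dev/etherd/e0.1

    # then hand it to guests like any other VBD, e.g.
    #   disk = [ 'phy:/dev/etherd/e0.1,xvdb,w!' ]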