
Re: AW: [Xen-users] Deploying redhat clusters under Xen 3.0.2



carlopmart wrote:
Many thanks Thomas.

thomas.vonsteiger@xxxxxxxxxx wrote:
  - Can this change affect all disks in the guest, including the node disk?

Yes, it affects all disks defined for the Xen guest.
Maybe in the future we will be able to define storage for Xen guests as shared or non-shared by default. This only matters if you set up the cluster with a shared disk on one Xen host.
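
(Note: instead of patching the block script globally, the Xen user manual documents a per-disk override: appending '!' to the mode, i.e. 'w!', tells xend to allow sharing that one writable disk. A minimal sketch; the volume names here are placeholders, not from my config:)

    # Sketch only -- volume names are placeholders.
    # 'w'  = private writable disk (sharing check enforced)
    # 'w!' = writable disk that may be attached to several guests
    disk = [ 'phy:/dev/xenvg/node1root,ioemu:hda,w',
             'phy:/dev/xenvg/shared1,ioemu:hdb,w!' ]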

  - How can I set up virtual storage in the config file, as a normal disk without any parameter?

I don't understand your question.
Have a look here:
http://www.cl.cam.ac.uk/Research/SRG/netos/xen/readmes/user/user.html#SECTION03300000000000000000

This is my node1 storage config for RHCS with GFS, which I use to play with and learn about this. Everything is done with LVM:

disk = [ 'phy:/dev/xenvg/xenRHEL4_3,ioemu:hda,w',   # node system disk
         'phy:/dev/xenvg/xenCLU1fs1,ioemu:hdb,w',   # cluster filesystem 1
         'phy:/dev/xenvg/xenCLU1fs2,ioemu:hdc,w' ]  # cluster filesystem 2
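
(For completeness, creating such volumes is one lvcreate each; the sizes below are illustrative, only the volume group and LV names match my config above:)

    lvcreate -L 8G -n xenCLU1fs1 xenvg
    lvcreate -L 8G -n xenCLU1fs2 xenvg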

  - If I use GFS and two nodes write at the same time, can this virtual storage be corrupted?

No, GFS is designed for exactly such scenarios.
http://www.redhat.com/docs/manuals/csgfs/
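
(For reference, making and mounting a GFS filesystem for a two-node cluster looks roughly like this. The cluster name, filesystem name, and mount point are placeholders, and -j must be at least the number of nodes that will mount it:)

    # Sketch, assuming RHEL4-era GFS 6.1 userland with DLM locking.
    gfs_mkfs -p lock_dlm -t CLU1:gfs1 -j 2 /dev/xenvg/xenCLU1fs1
    mount -t gfs /dev/xenvg/xenCLU1fs1 /mnt/gfs1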


Thomas

-----Original Message-----
From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of carlopmart
Sent: Monday, 21 August 2006 09:24
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] Deploying redhat clusters under Xen 3.0.2

Tim, more information:

  - I need to use the GFS filesystem for this virtual storage.
  - I do not use Heartbeat; I have to deploy this configuration with RedHat Cluster Suite.
  - Only one network interface per node, using a 100Mb private network.

Thomas,

  About this:

  If you change lines 105 and 116 in /etc/xen/scripts/block from
       echo 'local'
to:
       echo 'ok'
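
(A blunt way to apply that change, assuming the stock 3.0.2 script where those are the only two echo 'local' occurrences. Back up the script first, since this disables the sharing safety check for every guest disk on the host:)

    # Verify the matches before running; this rewrites both lines at once.
    cp /etc/xen/scripts/block /etc/xen/scripts/block.orig
    sed -i "s/echo 'local'/echo 'ok'/g" /etc/xen/scripts/block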

I have some questions:

  - Can this change affect all disks in the guest, including the node disk?
  - How can I set up virtual storage in the config file, as a normal disk without any parameter?
  - If I use GFS and two nodes write at the same time, can this virtual storage be corrupted?

Many thanks



Tim Post wrote:
On Fri, 2006-08-18 at 22:42 +0200, carlopmart wrote:
Hi all,

  I need to deploy several RedHat servers with Cluster Suite under Xen (using Intel VT hardware for the hosts). To accomplish this, I need to set up a virtual shared storage disk to serve several nodes. Does Xen 3.0.2 support a disk locking feature for deploying clusters (load balancing and HA configs), as VMware products do? And my last question: can I use a virtual SCSI disk as shared storage?

Many thanks.


Some more information about your cluster would be helpful :) How many NICs per node, and is one of those NICs talking to a private gig-e network?

We use Xen to help manage or completely contain everything from OpenSSI to our own Pound / Lighttpd / DRBD / Heartbeat based concoctions, and quite frankly I can't see doing without Xen in the future when it comes to both HA / load-balanced and HPC clusters.

Have you looked at GFS?

Cheers
-Tim



--
CL Martinez
carlopmart {at} gmail {d0t} com







I had CLVM, CMAN, CCS, and GFS set up on a development Xen cluster. I just got rid of Xen because we needed the full machine performance; basically, we were going in the opposite direction from virtualization. :) It worked fine most of the time, but there are some things I would do differently.

First, don't let dom0 manage any of the storage. Use one of GNBD, iSCSI, AoE, etc. and dedicated private network cards for each domain. I wouldn't even bother bridging vifs.

Second, use the new credit scheduler. If you decide not to take this advice, expect to be turning your CMAN timeouts way up when you find your domains stalling and getting fenced multiple times a day. :)

It would be nice if the dom0 <-> domU traffic worked so fence_xen functioned. If you can't get it to work, say hello to fence_manual. Not fun.
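
(Two concrete knobs for the above, both sketches with placeholder values. The credit scheduler is selected on the hypervisor's boot line, assuming a Xen build new enough to include it; the dead-node timeout lives in /etc/cluster/cluster.conf on RHEL4-era clusters. Check that your cman version honors the deadnode_timeout attribute, and treat 21 seconds as purely illustrative:)

    # /boot/grub/menu.lst -- boot the hypervisor with the credit scheduler
    kernel /boot/xen.gz sched=credit dom0_mem=512M

    <!-- /etc/cluster/cluster.conf -- raise the CMAN dead-node timeout -->
    <cman deadnode_timeout="21"/>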

--
Christopher G. Stach II

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

