
Re: Re: [Xen-users] Shared volume: Software-iSCSI or GFS or OCFS2?


  • To: "Nick Couchman" <Nick.Couchman@xxxxxxxxx>
  • From: "Rustedt, Florian" <Florian.Rustedt@xxxxxxxxxxx>
  • Date: Mon, 17 Nov 2008 16:23:33 +0100
  • Cc: xen-users@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Mon, 17 Nov 2008 07:24:17 -0800
  • List-id: Xen user discussion <xen-users.lists.xensource.com>
  • Thread-index: AclIw3xRmCW+ctXxS4udIZDIEy2y6AAA+PRg
  • Thread-topic: AW: [Xen-users] Shared volume: Software-ISCSI or GFS orOCFS2?

I am still casting around for the right technology to share partitions such as
/usr, /lib, and /lib/modules between several VMs.

For that, I need to mount them read-only on most guests but read-write on one
host, so that this is the host where I can install or remove software, with the
changes applied to all the read-only clients.

I tried this with a normal XFS partition, and it breaks when mounted in
different modes: the read-only clients got I/O errors. (A read-only mount still
caches metadata, and a non-cluster filesystem like XFS assumes exclusive access
to the disk, so the readers see the device change underneath them.)

So I looked for other approaches and first tried LVM snapshots mounted
read-write, but those broke as well.

At this point, I think the best way is to use a cluster-aware filesystem on the
partitions?

At first I thought iSCSI had some kind of integrated locking mechanism so that
I could mount a volume multiple times without errors, but in the meantime I
have learned that locking is the filesystem's job, so iSCSI by itself is no
longer interesting.

So the best advice would be to format my shared partitions with GFS or OCFS2
and share them that way?
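For what it's worth, the OCFS2 route needs a cluster description on every node
before any shared mount works. A minimal sketch, assuming a two-node cluster
named "xencluster" with invented node names, IPs, and the device /dev/xvdb1
(none of these come from this thread):

```shell
# Hedged sketch, not a tested recipe: a minimal two-node OCFS2 cluster
# description. Cluster name, node names, IPs, and /dev/xvdb1 are all
# invented for illustration.
CONF=cluster.conf   # in production: /etc/ocfs2/cluster.conf, identical on every node

cat > "$CONF" <<'EOF'
node:
	ip_port = 7777
	ip_address = 192.168.0.10
	number = 0
	name = xen-a
	cluster = xencluster

node:
	ip_port = 7777
	ip_address = 192.168.0.11
	number = 1
	name = xen-b
	cluster = xencluster

cluster:
	node_count = 2
	name = xencluster
EOF

# With the o2cb cluster stack configured and online on every node:
#   mkfs.ocfs2 -L shared_usr /dev/xvdb1          # run once, from a single node
#   mount -t ocfs2 /dev/xvdb1 /usr               # on the one writer
#   mount -t ocfs2 -o ro /dev/xvdb1 /usr         # on the read-only guests
```

The key point is that the cluster stack, not the mount options, is what makes
the concurrent mounts safe; "-o ro" on the readers is then just a policy choice.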

Kind regards, Florian

-----Original Message-----
From: Nick Couchman [mailto:Nick.Couchman@xxxxxxxxx]
Sent: Monday, 17 November 2008 15:53
To: Rustedt, Florian
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: Re: [Xen-users] Shared volume: Software-iSCSI or GFS or OCFS2?

I'm not sure where or how you want to use software iSCSI - maybe you could 
provide a more thorough description of your Xen environment?

As far as OCFS2 vs. GFS, you can use whichever you like.  I use OCFS2 for two
reasons: first, it's included with SLES 10, and second, I find it slightly
easier to configure than GFS.  It has its downsides, too, but it works fine for
me.  Use whichever cluster-aware FS you want.

-Nick
Nick Couchman
Manager, Information Technology
**********************************************************************************************
IMPORTANT: The contents of this email and any attachments are confidential.
They are intended for the named recipient(s) only. If you have received this
email in error, please notify the system manager or the sender immediately and
do not disclose the contents to anyone or make copies thereof.
**********************************************************************************************


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

