Re: [Xen-users] lustre clustre file system and xen 3
Lustre is built for large super-clusters (compute clusters). It's built to be fast (over a few GB/s throughput) but performs poorly with small files. We will be evaluating Lustre/PVFS2/GPFS for our compute clusters (about 800 nodes). I'm not sure how Lustre handles failure when a storage node dies; if you find out, let me know. But with Lustre you would have to store system images as files, not devices. Others can chime in, but in most of the Xen docs I've read, file-backed VMs were not the preferred approach. A database cluster file system like OCFS, or something else, would probably work better for reliability.

Brock

On May 18, 2006, at 2:25 PM, Karsten Nielsen wrote:

> Maybe I phrased my question wrong. I have read a lot on the mailing list
> about the pros and cons of different ways to make the file backend
> available to multiple physical servers. But it seems that there is no
> really good answer to that question, as far as I have read. There are
> pros and cons to every solution. What I was looking for is a file
> backend that performs very well and is reliable.
>
> If I want to use OCFS2 I cannot resize the file system
> (http://www.mail-archive.com/ocfs2-users@xxxxxxxxxxxxxx/msg00059.html).
> If I want to use GFS, its performance is not that great
> (http://guialivre.governoeletronico.gov.br/mediawiki/index.php/TesteGFSGraficoRaid10_ext3vsgfs
> and http://guialivre.governoeletronico.gov.br/mediawiki/index.php/TestesGFS).
>
> Maybe I am making this too complicated and should not worry about the
> locking system of cluster file systems; what I am really looking for is
> performance and reliability. Any hints? And why do you think that Lustre
> is a bad idea?
>
> Christopher G. Stach II wrote:
>> Karsten Nielsen wrote:
>>> How will you make the LVM2 or raw partitions available to the
>>> application servers? I have 2 physical application servers and 1 file
>>> backend server. That means that I have 3 servers.
>>
>> How would you make a file backend available to multiple physical
>> servers? That's a rhetorical question, but to answer yours,
>> probably NBD.
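To make the file-backed vs. device-backed distinction concrete, here is a minimal Xen 3 domU disk configuration sketch; the paths and device names are hypothetical:

    # Disk stanza from a hypothetical /etc/xen/domu1 config.
    # File-backed: the guest disk is a regular file (e.g. on a Lustre
    # mount); dom0 attaches it through a loopback device.
    disk = [ 'file:/mnt/lustre/images/domu1.img,xvda,w' ]

    # Device-backed alternative: the guest disk is a real block device
    # in dom0 (an LVM volume, NBD device, iSCSI LUN, ...).
    # disk = [ 'phy:/dev/vg0/domu1,xvda,w' ]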
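Christopher's NBD suggestion might look roughly like the following; the hostname, port, and device paths are made up, and the exact nbd-server/nbd-client syntax varies by version, so treat this as a sketch rather than a recipe:

    # On the file backend server: export a block device on TCP port 2000.
    nbd-server 2000 /dev/vg0/domu1

    # On an application server: load the client module and attach the
    # export as a local block device.
    modprobe nbd
    nbd-client backend-host 2000 /dev/nbd0

    # /dev/nbd0 can then back a domU via 'phy:/dev/nbd0,xvda,w'.

Note that attaching the same export read-write from two servers at once will corrupt it unless a cluster file system coordinates access, which is exactly the locking question raised above.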
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users