
[Xen-users] Re: Filesystems that support migration [Was: Re: Live Migration Config]


  • To: <xen-users@xxxxxxxxxxxxxxxxxxx>
  • From: "Paul Tap" <paul.tap@xxxxxxxxxxx>
  • Date: Tue, 1 Nov 2005 18:06:59 +0100 (CET)
  • Delivery-date: Tue, 01 Nov 2005 17:04:09 +0000
  • Importance: Normal
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

On Tue, 1 Nov 2005, Tom Brown wrote:
> From: Tom Brown <tbrown@xxxxxxxxxxxxx>
> Subject: Re: Filesystems that support migration [Was: Re: [Xen-users]
>       Re:     Live Migration Config]
> To: xen-users@xxxxxxxxxxxxxxxxxxx
> Message-ID:
>       <Pine.LNX.4.44.0510311329380.1683-100000@xxxxxxxxxxxxxxxxxx>
> Content-Type: TEXT/PLAIN; charset=US-ASCII
>
> On Mon, 31 Oct 2005, Nate Carlson wrote:
>
>> On Sun, 30 Oct 2005, Tom Brown wrote:
>> > but then I haven't managed to build a filesystem that could be
>> > migrated and allows high performance... so it isn't much of a loss
>> > :)
>> >
>> > [nfs works, but performance bites when compared to a fully cached
>> > local block device... anyone wanna start a new thread?]
>>
>> I just use a SAN with fibre channel.. works great. I'm using CLVM, so
>> all
>
> a brand name solution, or something cooked up?
>
>> the xen0 boxes see the block devices with the same names; also using
>> GFS to share home directories and such within the domU's.
>
> I looked at trying to get GFS running, but it looked like getting it to
> work was going to be a lot of work, and I wasn't sure if there would be
> patch conflicts with XEN.
>
>> Beats the crap out of the NFS solution I used to use.. :)
>
> That is encouraging to hear. I was considering looking at iSCSI and/or
> ATA over ethernet.
>
I'm currently running FC4/Xen using LVM volumes that are exported via AoE
(a raidBlade/20 plus a 10-disk PATA bay in RAID10). Works fine. I haven't
run any real performance tests yet, though, and I'm having an issue with
migration, but as far as I can see that's not related to the storage. The
/etc/xen directory is also shared: the management node mounts it rw, all
other Xen0 hosts mount it ro. I'm looking at clustered filesystems, but so
far I don't need one.
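
In case it helps anyone trying something similar, here's a rough sketch of
what a domU config backed by an AoE-exported LVM volume can look like. The
device names, shelf/slot numbers and volume names below are just
placeholders for my setup, so adjust to taste:

# /etc/xen/vm01 -- minimal sketch; assumes the LV is exported on the
# storage box with something like "vblade 0 1 <storage-nic> /dev/vg0/vm01"
# and that the dom0 has the aoe module loaded, so the device shows up as
# /dev/etherd/e0.1 (the aoe module's aoe_iflist option can restrict
# discovery to the storage NICs only).
kernel = "/boot/vmlinuz-2.6-xenU"
memory = 256
name   = "vm01"
vif    = [ '' ]
disk   = [ 'phy:/dev/etherd/e0.1,sda1,w' ]
root   = "/dev/sda1 ro"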

Each physical server (IBM Blade HS20) has four Gb NICs: two redundant ones
for the storage (no IP traffic) and two for IP traffic. In the future I may
move the Xen0 IP traffic onto the storage network, for security reasons and
to separate the "physical layer" from the virtual one.

I've done some successful tests with the blades booting via PXE/AoE, but
since I'm currently running stock FC4 I've set that aside for later.

> Anyone else got any suggestions/tips? Hopefully I'll actually have time
> to work on this tomorrow, so soliciting ideas today seems like a good
> idea :-)
>
> -Tom

Cheers,

Paul

-- 
Armorica
Open Source Software - Consultancy

Berkelstraat 91
3522 EL Utrecht
Telefoon: +31 30 289 4890
Mobiel:   +31 653 269 629
e-Mail:   paul.tap@xxxxxxxxxxx



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

