
Re: [Xen-users] Shared SAN disk LUN between 2 servers and migration problem



Pekka.Panula@xxxxxxxx wrote:
>> On 16.10.2008 11:50, George Rushby <george@xxxxxxxxxxxxxx> wrote
>> to "xen-users@xxxxxxxxxxxxxxxxxxx" (RE: [Xen-users] Shared SAN disk
>> LUN between 2 servers and migration problem):
>>
>> You are connecting to the same volume with at least 2 servers. This
>> will eventually corrupt your file system. The array has no knowledge
>> of your operating system; at this level you should think of a volume
>> as a disk drive. If you are not using a cluster, you must restrict
>> access to the volume to a single server. Once you connect to the
>> volume, you share it through that server. Most people set up a
>> server with 2 NICs: one connected to the iSCSI VLAN for the array,
>> the other on the public VLAN. This way you separate your traffic and
>> are still able to share the data on the volume.
>>
>> You should also look into GFS.
> 
> But why do you need GFS if only one server is accessing the shared LUN
> at a time? Server 2 is running the domU and reading/writing the
> multipath device there, while on the other server the device just sits
> idle; no other server touches it until the domU is migrated over. I am
> running only one accessor per multipathed LUN at a time, not several
> nodes against the same LUN, and I don't need active/passive failover;
> I just want to migrate the domUs when doing maintenance, e.g. rebooting
> a dom0. The other dom0s do not touch my multipath device at all; in
> other words, one multipath device per domU. This is on an FC SAN disk
> system, so the servers are pointed at the same LUN on the array, but
> only one server writes to it at any given time.
> 
> So where am I going wrong? Does Linux/Xen/the multipath layer not sync
> all operations to the block device when Xen migrates the domU to
> another server? Why doesn't Xen tell the OS to sync data to disk when a
> migration happens, or does it?
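
For reference, a per-domU setup like you describe usually boils down
to a single phy: mapping in the domU config, something like this (the
device and file names below are made up for illustration):

    # /etc/xen/win2003.cfg -- hypothetical example
    # one shared FC LUN, visible through dm-multipath on every dom0
    disk = [ 'phy:/dev/mapper/win2003-lun,hda,w' ]

As long as exactly one dom0 has the domU running, nothing else opens
the device; the critical window is only during the migration itself.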

You might want to look for "block-iscsi". You'll find it via Google or
MarkMail, as it has been posted on this ML as well.
That script connects a LUN when the domU starts and disconnects it on
shutdown, and the block scripts in general are aware of domU migration;
a rough sketch of the idea follows below.
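
This is not the actual block-iscsi script, just a minimal sketch of
the mechanism it relies on, assuming open-iscsi's iscsiadm; the
portal, IQN, and device path are placeholders:

    #!/bin/sh
    # /etc/xen/scripts/block-iscsi -- sketch only, not the real script.
    # Xen runs block scripts with "add" when a domU starts (or arrives
    # via migration) and "remove" when it shuts down (or leaves), so
    # the LUN is only logged in on the dom0 currently running the domU.
    dir=$(dirname "$0")
    . "$dir/block-common.sh"   # provides $command and write_dev

    PORTAL="192.168.1.10:3260"                  # placeholder portal
    IQN="iqn.2008-10.example:storage.lun1"      # placeholder IQN

    case "$command" in
      add)
        iscsiadm -m node -T "$IQN" -p "$PORTAL" --login
        sleep 2   # crude wait for the device node to appear
        write_dev "/dev/disk/by-path/ip-$PORTAL-iscsi-$IQN-lun-0"
        ;;
      remove)
        iscsiadm -m node -T "$IQN" -p "$PORTAL" --logout
        ;;
    esac

The point is that attach/detach follows the domU around, so you never
have to log the LUN in or out by hand on each dom0.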

The only thing you can't do without a cluster filesystem is run the
same remotely connected machine on more than one dom0 at the same time.

> 
> Anyway, on an HVM Windows 2003 Standard Server I am getting corruption
> when files are accessed during the migration. I have tested this by
> installing 7-Zip, letting it compress e.g. the Windows directory,
> migrating the domU to the other server and back again, and then
> verifying the archive: 7-Zip reports that lots of files are corrupted.
> I have not tested a PV guest, but in any case I need to run many
> Windows servers, so I need to get this working. Of course I can do it
> today without live migration, by manually shutting down the domU,
> copying its Xen configuration to the other server, and starting it
> there...
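
For what it's worth, that round trip is easy to script from the dom0
side if you want to repeat the test, e.g. against a PV guest as well;
the host and domU names below are made up:

    # push the busy domU to the other dom0 and pull it back, then
    # verify the archive inside the guest (e.g. "7z t archive.7z").
    xm migrate --live win2003 server2
    ssh server2 xm migrate --live win2003 server1

If the archive still fails verification with only one dom0 logged in
to the LUN at any time, the damage is happening in the migration
hand-over itself rather than through concurrent access.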

> 
> Terveisin/Regards,
>    Pekka Panula, Net Servant Oy


-- 
Stephan Seitz
Senior System Administrator

*netz-haut* e.K.
multimedia communication

zweierweg 22
97074 würzburg

phone: +49 931 2876247
fax: +49 931 2876248

web: www.netz-haut.de <http://www.netz-haut.de/>

registration court: amtsgericht würzburg, hra 5054


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users

 

