
Re: [Xen-devel] XCP: sr driver question wrt vm-migrate



This is usually the result of a failure earlier on. Could you grep through the
logs to get the whole trace of what went on? Best thing to do is grep for 
VM.pool_migrate, then find the task reference (the hex string beginning with 
'R:' immediately after the 'VM.pool_migrate') and grep for this string in the 
logs on both the source and destination machines. 
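
If it helps, here is a rough Python sketch of those two greps in one go. It
assumes xapi's log lives at /var/log/xensource.log and that task references
look like 'R:' followed by hex; adjust both if your setup differs:

  import re
  import sys

  # Sketch only: find the first VM.pool_migrate entry, pull out its task
  # reference (the 'R:<hex>' string), then print every line mentioning it.
  # The log path is an assumption; point it at whichever file you use.
  LOG = "/var/log/xensource.log"

  lines = open(LOG).readlines()

  task_ref = None
  for line in lines:
      if "VM.pool_migrate" in line:
          m = re.search(r"R:[0-9a-fA-F]+", line)
          if m:
              task_ref = m.group(0)
              break

  if task_ref is None:
      sys.exit("no VM.pool_migrate task reference found in %s" % LOG)

  sys.stdout.write("task reference: %s\n" % task_ref)
  for line in lines:
      if task_ref in line:
          sys.stdout.write(line)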

Have a look through these, and if it's still not obvious what went wrong, post 
them to the list and we can have a look.

Cheers,

Jon


On 16 Jun 2010, at 07:19, YAMAMOTO Takashi wrote:

> hi,
> 
> after making my sr driver defer the attach operation as you suggested,
> i got migration to work.  thanks!
> 
> however, when repeating live migration between two hosts for testing,
> i got the following error.  it doesn't seem very reproducible.
> do you have any idea?
> 
> YAMAMOTO Takashi
> 
> + xe vm-migrate live=true uuid=23ecfa58-aa30-ea6a-f9fe-7cb2a5487592 
> host=67b8b07b-8c50-4677-a511-beb196ea766f
> An error occurred during the migration process.
> vm: 23ecfa58-aa30-ea6a-f9fe-7cb2a5487592 (CentOS53x64-1)
> source: eea41bdd-d2ce-4a9a-bc51-1ca286320296 (s6)
> destination: 67b8b07b-8c50-4677-a511-beb196ea766f (s1)
> msg: Caught exception INTERNAL_ERROR: [ 
> Xapi_vm_migrate.Remote_failed("unmarshalling result code from remote") ] at 
> last minute during migration
> 
>> hi,
>> 
>> i'll try deferring the attach operation to vdi_activate.
>> thanks!
>> 
>> YAMAMOTO Takashi
>> 
>>> Yup, vdi activate is the way forward.
>>> 
>>> If you advertise VDI_ACTIVATE and VDI_DEACTIVATE in the 'get_driver_info' 
>>> response, xapi will call the following during the start-migrate-shutdown 
>>> lifecycle:
>>> 
>>> VM start:
>>> 
>>> host A: VDI.attach
>>> host A: VDI.activate
>>> 
>>> VM migrate:
>>> 
>>> host B: VDI.attach
>>> 
>>>  (VM pauses on host A)
>>> 
>>> host A: VDI.deactivate
>>> host B: VDI.activate
>>> 
>>>  (VM unpauses on host B)
>>> 
>>> host A: VDI.detach
>>> 
>>> VM shutdown:
>>> 
>>> host B: VDI.deactivate
>>> host B: VDI.detach
>>> 
>>> so the disk is never activated on both hosts at once, but it does still go 
>>> through a period when it is attached to both hosts at once. So you could, 
>>> for example, check that the disk *could* be attached on the vdi_attach 
>>> SMAPI call, and actually attach it properly on the vdi_activate call.
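>>> 
>>> As a very rough illustration (a sketch only: the class layout and the
>>> 'volume' helper calls below are made up for this example, not the exact
>>> SMAPI driver framework), the idea is:
>>> 
>>>   # Advertise activate/deactivate so xapi drives the lifecycle above.
>>>   CAPABILITIES = [
>>>       "VDI_ATTACH", "VDI_DETACH",
>>>       "VDI_ACTIVATE", "VDI_DEACTIVATE",
>>>   ]
>>> 
>>>   def get_driver_info():
>>>       # The returned capabilities tell xapi to call activate/deactivate.
>>>       return {"name": "examplesr", "capabilities": CAPABILITIES}
>>> 
>>>   class ExampleVDI(object):
>>>       def __init__(self, volume):
>>>           # 'volume' stands in for whatever object talks to your array;
>>>           # its methods here are hypothetical.
>>>           self.volume = volume
>>> 
>>>       def attach(self, sr_uuid, vdi_uuid):
>>>           # Called on *both* hosts during a migrate, so only check that
>>>           # the volume could be mapped here; don't take it exclusively.
>>>           if not self.volume.is_reachable_from_this_host():
>>>               raise Exception("volume not reachable from this host")
>>>           return {"params": self.volume.expected_device_path()}
>>> 
>>>       def activate(self, sr_uuid, vdi_uuid):
>>>           # Only ever called on one host at a time, so do the real,
>>>           # exclusive mapping of the volume here.
>>>           self.volume.map_exclusively_to_this_host()
>>> 
>>>       def deactivate(self, sr_uuid, vdi_uuid):
>>>           # Undo the exclusive mapping; the other host's activate
>>>           # happens after this returns.
>>>           self.volume.unmap_from_this_host()
>>> 
>>>       def detach(self, sr_uuid, vdi_uuid):
>>>           # Nothing to tear down if attach only performed checks.
>>>           pass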
>>> 
>>> Hope this helps,
>>> 
>>> Jon
>>> 
>>> 
>>> On 7 Jun 2010, at 09:26, YAMAMOTO Takashi wrote:
>>> 
>>>> hi,
>>>> 
>>>> on vm-migrate, xapi attaches a vdi on the migrate-to host
>>>> before detaching it on the migrate-from host.
>>>> unfortunately it doesn't work for our product, which doesn't
>>>> provide a way to attach a volume to multiple hosts at the same time.
>>>> is VDI_ACTIVATE something i can use as a workaround?
>>>> or any other suggestions?
>>>> 
>>>> YAMAMOTO Takashi
>>>> 
>> 


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

