
Re: [Xen-API] The vdi is not available



I can't tell you much more; I've had you try almost everything I could think of.

At this stage, the only thing left I'd try is to put some debug into the /opt/xensource/sm scripts (NFSSR, nfs.py and so on) to trace back where it goes wrong, but that's a lot of work.

Maybe you should try to completely reinstall the faulty XCP server, but even that may not be enough. And if it doesn't help, the next step would be to completely remove the NFS SR from the pool and re-add it, but this would require you to stop all VMs on every server. Not doable in a production environment.

If none of this works, you're left with the quick and dirty fix of manually remounting the storage whenever the server reboots.
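If it helps, that dirty fix can be wrapped into a small helper. This is a sketch only: the commands are echoed rather than executed so nothing is unmounted by accident, and the uuid/server/export values are the ones from this thread, so adjust them for your pool.

```shell
#!/bin/sh
# Sketch of the manual-remount dirty fix. Values below are from this thread;
# drop the echos to actually run it on the affected host.
SR_UUID=9f9aa794-86c0-9c36-a99d-1e5fdc14a206
SERVER=10.254.253.9
EXPORT=/xen
MNT=/var/run/sr-mount/$SR_UUID

echo umount "$MNT"
echo mount.nfs "$SERVER:$EXPORT/$SR_UUID" "$MNT" -o soft,timeo=133,retrans=2147483647,tcp,noac
```

You could hook something like this into rc.local as a stopgap until the real cause is found.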

At this point I've reached the limit of my knowledge of how XCP storage works.

Cheers,
Sébastien



On 25.07.2013 19:25, Andres E. Moya wrote:
Sorry, I realised I was missing the mount point... that did work.

So where do I find where it tries to use the wrong mount point?

----- Original Message -----
From: "Sébastien RICCIO" <sr@xxxxxxxxxxxxxxx>
To: "Andres E. Moya" <amoya@xxxxxxxxxxxxxxxxx>, xen-api@xxxxxxxxxxxxx
Sent: Thursday, July 25, 2013 1:08:02 PM
Subject: Re: [Xen-API] The vdi is not available

I don't get why it's not mounting with the uuid subdir. It should.

On our pool:

Jul 25 10:13:39 xen-blade10 SM: [30890] ['mount.nfs',
'10.50.50.11:/storage/nfs1/cc744878-9d79-37df-98cb-cd88eebdab61',
'/var/run/sr-mount/cc744878-9d79-37df-98cb-cd88eebdab61', '-o',
'soft,timeo=133,retrans=2147483647,tcp,actimeo=0']

as a temporary dirty fix you could try:

umount  /var/run/sr-mount/9f9aa794-86c0-9c36-a99d-1e5fdc14a206
mount.nfs  10.254.253.9:/xen/9f9aa794-86c0-9c36-a99d-1e5fdc14a206 -o
soft,timeo=133,retrans=2147483647,tcp,noac

to manually remount it correctly


On 25.07.2013 18:48, Andres E. Moya wrote:
I restarted and tried to unplug, and got the same message. Here is the grep:


[root@nj-xen-04 ~]# grep mount.nfs /var/log/SMlog
[31636] 2013-07-24 16:43:54.140961      ['mount.nfs', 
'10.254.253.9:/secondary', 
'/var/run/sr-mount/f21def12-74a2-8fab-1e1c-f41968e889bb', '-o', 
'soft,timeo=133,retrans=2147483647,tcp,noac']
[9277] 2013-07-25 12:36:42.416286       ['mount.nfs', '10.254.253.9:/iso', 
'/var/run/sr-mount/fbfbf5b3-a37a-288a-86aa-d8d168173f98', '-o', 
'soft,timeo=133,retrans=2147483647,tcp,noac']
[9393] 2013-07-25 12:36:43.241531       ['mount.nfs', '10.254.253.9:/xen', 
'/var/run/sr-mount/9f9aa794-86c0-9c36-a99d-1e5fdc14a206', '-o', 
'soft,timeo=133,retrans=2147483647,tcp,noac']


----- Original Message -----
From: "Sébastien RICCIO" <sr@xxxxxxxxxxxxxxx>
To: "Andres E. Moya" <amoya@xxxxxxxxxxxxxxxxx>, xen-api@xxxxxxxxxxxxx
Sent: Thursday, July 25, 2013 12:29:24 PM
Subject: Re: [Xen-API] The vdi is not available

Okay, in this case try to reboot the server, and check whether that fixed
the mount.

If not, you should "grep mount.nfs /var/log/SMlog" and look at what command
line XS uses to mount your storage.


On 25.07.2013 18:22, Andres E. Moya wrote:
there are no tasks / it returns empty

Moya Solutions, Inc.
amoya@xxxxxxxxxxxxxxxxx
0 | 646-918-5238 x 102
F | 646-390-1806

----- Original Message -----
From: "Sébastien RICCIO" <sr@xxxxxxxxxxxxxxx>
To: "Andres E. Moya" <amoya@xxxxxxxxxxxxxxxxx>, xen-api@xxxxxxxxxxxxx
Sent: Thursday, July 25, 2013 12:20:05 PM
Subject: Re: [Xen-API] The vdi is not available

xe task-list uuid=9c7b7690-a301-41ef-b7d5-d4abd8b70fbc

If it returns something

xe task-cancel uuid=9c7b7690-a301-41ef-b7d5-d4abd8b70fbc

then try again to unplug the pbd

OR

if nothing is running on the server, consider trying a reboot

Sorry this is hard to debug remotely.

On 25.07.2013 18:10, Andres E. Moya wrote:
xe pbd-unplug uuid=a0739a97-408b-afed-7ac2-fe76ffec3ee7
This operation cannot be performed because this VDI is in use by some other 
operation
vdi: 96c158d3-2b31-41d1-8287-aa9fb6d5eb6c (Windows Server 2003 0)
operation: 9c7b7690-a301-41ef-b7d5-d4abd8b70fbc (Windows 7 (64-bit) (1) 0)
<extra>: 405f6cce-d750-47e1-aec3-c8f8f3ae6290 (Plesk Management 0)
<extra>: dad9b85a-ee2f-4b48-94f0-79db8dfd78dd (mx5 0)
<extra>: 13b558f8-0c3f-4df9-8766-d8e1306b25d5 (Windows Server 2008 R2 (64-bit) 
(1) 0)

this was done on the server that has nothing running on it

Moya Solutions, Inc.
amoya@xxxxxxxxxxxxxxxxx
0 | 646-918-5238 x 102
F | 646-390-1806

----- Original Message -----
From: "Sébastien RICCIO" <sr@xxxxxxxxxxxxxxx>
To: "Andres E. Moya" <amoya@xxxxxxxxxxxxxxxxx>
Cc: xen-api@xxxxxxxxxxxxx
Sent: Thursday, July 25, 2013 12:02:12 PM
Subject: Re: [Xen-API] The vdi is not available

This looks correct. You should maybe try to unplug / replug the storage
on the server where it's wrong.

for example if it's on nj-xen-03:

xe pbd-unplug uuid=a0739a97-408b-afed-7ac2-fe76ffec3ee7
then
xe pbd-plug uuid=a0739a97-408b-afed-7ac2-fe76ffec3ee7

and check if it's then mounted the right way.
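A quick way to verify, as a sketch (check_sr_mount is just a helper name I'm making up; pass it the SR uuid from this thread):

```shell
# Sketch: does the NFS remote path for the SR end with the SR uuid?
# Usage on the host: mount | check_sr_mount <sr-uuid>
check_sr_mount() {
    grep "sr-mount/$1" | grep -q ":/.*/$1 on " \
        && echo "mounted with uuid subdir (correct)" \
        || echo "mounted without uuid subdir (wrong)"
}
```

For example: `mount | check_sr_mount 9f9aa794-86c0-9c36-a99d-1e5fdc14a206`.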


On 25.07.2013 17:36, Andres E. Moya wrote:
[root@nj-xen-01 ~]# xe pbd-list sr-uuid=9f9aa794-86c0-9c36-a99d-1e5fdc14a206
uuid ( RO)                  : c53d12f6-c3a6-0ae2-75fb-c67c761b2716
                  host-uuid ( RO): b8ca0c69-6023-48c5-9b61-bd5871093f4e
                    sr-uuid ( RO): 9f9aa794-86c0-9c36-a99d-1e5fdc14a206
              device-config (MRO): serverpath: /xen; options: ; server: 
10.254.253.9
         currently-attached ( RO): true


uuid ( RO)                  : a0739a97-408b-afed-7ac2-fe76ffec3ee7
                  host-uuid ( RO): a464b853-47d7-4756-b9ab-49cb00c5aebb
                    sr-uuid ( RO): 9f9aa794-86c0-9c36-a99d-1e5fdc14a206
              device-config (MRO): serverpath: /xen; options: ; server: 
10.254.253.9
         currently-attached ( RO): true


uuid ( RO)                  : 6f2c0e7d-fdda-e406-c2e1-d4ef81552b17
                  host-uuid ( RO): dab9cd1a-7ca8-4441-a78f-445580d851d2
                    sr-uuid ( RO): 9f9aa794-86c0-9c36-a99d-1e5fdc14a206
              device-config (MRO): serverpath: /xen; options: ; server: 
10.254.253.9
         currently-attached ( RO): true

[root@nj-xen-01 ~]# xe host-list
uuid ( RO)                : a464b853-47d7-4756-b9ab-49cb00c5aebb
               name-label ( RW): nj-xen-03
         name-description ( RW): Default install of XenServer


uuid ( RO)                : dab9cd1a-7ca8-4441-a78f-445580d851d2
               name-label ( RW): nj-xen-04
         name-description ( RW): Default install of XenServer


uuid ( RO)                : b8ca0c69-6023-48c5-9b61-bd5871093f4e
               name-label ( RW): nj-xen-01
         name-description ( RW): Default install of XenServer



----- Original Message -----
From: "Sébastien RICCIO" <sr@xxxxxxxxxxxxxxx>
To: "Andres E. Moya" <amoya@xxxxxxxxxxxxxxxxx>, xen-api@xxxxxxxxxxxxx
Sent: Thursday, July 25, 2013 11:09:21 AM
Subject: Re: [Xen-API] The vdi is not available

Actually it is right to have:

10.254.253.9:/xen/9f9aa794-86c0-9c36-a99d-1e5fdc14a206 instead of
10.254.253.9:/xen

That is why on this non-working server your file resides in
/var/run/sr-mount/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/9f9aa794-86c0-9c36-a99d-1e5fdc14a206

When you create an NFS SR in XCP and specify, for example,
10.254.253.9:/xen as the share to use, it will first create a directory on
the share with the id of the SR (9f9aa794-86c0-9c36-a99d-1e5fdc14a206)
and then remount 10.254.253.9:/xen/9f9aa794-86c0-9c36-a99d-1e5fdc14a206.
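Spelled out as commands, the two steps look roughly like this. It's a sketch of the behaviour described above (echoed rather than run), not the actual NFSSR code, and /tmp/sr-probe is just a hypothetical scratch mount point:

```shell
# Sketch of the two-step NFS SR attach described above (echoed, not run).
SERVER=10.254.253.9
EXPORT=/xen
SR_UUID=9f9aa794-86c0-9c36-a99d-1e5fdc14a206
# step 1: mount the bare export and create <export>/<sr-uuid> on it
echo mount.nfs "$SERVER:$EXPORT" /tmp/sr-probe
echo mkdir -p "/tmp/sr-probe/$SR_UUID"
echo umount /tmp/sr-probe
# step 2: remount the uuid subdirectory as the real SR mount point
echo mount.nfs "$SERVER:$EXPORT/$SR_UUID" "/var/run/sr-mount/$SR_UUID" \
    -o soft,timeo=133,retrans=2147483647,tcp,actimeo=0
```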

What is strange is that if your servers are in a pool they should share
the same mount path. Are they all in the same pool?

Can you please post the results of:

xe pbd-list sr-uuid=9f9aa794-86c0-9c36-a99d-1e5fdc14a206

and a

xe host-list

thanks


On 25.07.2013 16:51, Andres E. Moya wrote:
The mounts are not the same, but what's odd is that the servers that have it
working correctly actually seem to be mounting incorrectly?
please see below

the servers that are working correctly have

Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             4.0G  2.1G  1.7G  56% /
none                  373M   20K  373M   1% /dev/shm
10.254.253.9:/xen/9f9aa794-86c0-9c36-a99d-1e5fdc14a206
                             25T  127G   25T   1% 
/var/run/sr-mount/9f9aa794-86c0-9c36-a99d-1e5fdc14a206
10.254.253.9:/iso      25T  127G   25T   1% 
/var/run/sr-mount/fbfbf5b3-a37a-288a-86aa-d8d168173f98
//10.254.254.30/share
                            196G   26G  160G  14% 
/var/run/sr-mount/fc63fc27-89ca-dbc8-228d-27e3c74779bb

and the one that doesn't work has it in

Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             4.0G  2.0G  1.8G  54% /
none                  373M   24K  373M   1% /dev/shm
//10.254.254.30/share
                            196G   26G  160G  14% 
/var/run/sr-mount/fc63fc27-89ca-dbc8-228d-27e3c74779bb
10.254.253.9:/xen      25T  126G   25T   1% 
/var/run/sr-mount/9f9aa794-86c0-9c36-a99d-1e5fdc14a206
10.254.253.9:/iso      25T  126G   25T   1% 
/var/run/sr-mount/fbfbf5b3-a37a-288a-86aa-d8d168173f98


----- Original Message -----
From: "Sébastien RICCIO" <sr@xxxxxxxxxxxxxxx>
To: "Andres E. Moya" <amoya@xxxxxxxxxxxxxxxxx>, xen-api@xxxxxxxxxxxxx
Sent: Thursday, July 25, 2013 10:38:02 AM
Subject: Re: [Xen-API] The vdi is not available

Okay, I think you've got something here.

Do a df -h on each server to check the mount path for the SR on each of them.

Looks like one or more of your servers mounted it wrong.

/var/run/sr-mount/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/
this is not supposed to happen :(



On 25.07.2013 16:31, Andres E. Moya wrote:
I actually just took a look, and on the servers where everything is working
correctly everything is under
/var/run/sr-mount/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/

and on the one that complains that it can't find the file, the file is actually
located in
/var/run/sr-mount/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/

it's as if it is mounting the storage repository within itself.


How can I check if thin provisioning is enabled?


----- Original Message -----
From: "Sébastien RICCIO" <sr@xxxxxxxxxxxxxxx>
To: "Andres E. Moya" <amoya@xxxxxxxxxxxxxxxxx>
Cc: "xen-api" <xen-api@xxxxxxxxxxxxx>, "Alberto Castrillo" 
<castrillo@xxxxxxxxxx>
Sent: Thursday, July 25, 2013 10:21:44 AM
Subject: Re: [Xen-API] The vdi is not available

According to:

[25610] 2013-07-25 09:51:46.036917      ***** generic exception: vdi_attach: 
EXCEPTION SR.SROSError, The VDI is not available 
[opterr=/var/run/sr-mount/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/72ad514a-f1f8-4a34-9907-9c6a3506520b.raw
 not found]

[16462] 2013-07-25 10:02:49.485672 ['/usr/sbin/td-util', 'query', 'vhd',
'-vpf',
'/var/run/sr-mount/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/72ad514a-f1f8-4a34-9907-9c6a3506520b.vhd']

there is something wrong. It looks like it tries to open a .raw file instead
of a .vhd.

Maybe one of your servers was installed with the "thin provisioning" feature
selected and the other servers were not?

As far as I know thin provisioning uses vhd, non thin provisioning uses raw.
So if you have mixed installations, that will not work when using shared
storage between them.

My guess is that if you create a VM on one server it will create a .vhd
image, and on the other a .raw image.
I can't be 100% certain as I've always used thin provisioning.

You could check whether you have mixed raw/vhd files in
/var/run/sr-mount/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/
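One way to check for a mix, as a sketch (count_image_types is just a helper name; the path is the SR mount from this thread):

```shell
# Sketch: count .vhd vs .raw images under an SR mount point.
count_image_types() {
    # prints e.g. "vhd 12" and/or "raw 3" for the given directory
    ls "$1" 2>/dev/null \
        | awk -F. '/\.(vhd|raw)$/ { n[$NF]++ } END { for (t in n) print t, n[t] }'
}
count_image_types /var/run/sr-mount/9f9aa794-86c0-9c36-a99d-1e5fdc14a206
```

If it prints both a vhd and a raw count, you have a mixed SR.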




On 25.07.2013 16:04, Andres E. Moya wrote:
this was trying to start up the vm

[25610] 2013-07-25 09:51:45.997895      lock: acquired 
/var/lock/sm/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/sr
[25610] 2013-07-25 09:51:46.035698      Raising exception [46, The VDI is not 
available 
[opterr=/var/run/sr-mount/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/72ad514a-f1f8-4a34-9907-9c6a3506520b.raw
 not found]]
[25610] 2013-07-25 09:51:46.035831      lock: released 
/var/lock/sm/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/sr
[25610] 2013-07-25 09:51:46.036917      ***** generic exception: vdi_attach: 
EXCEPTION SR.SROSError, The VDI is not available 
[opterr=/var/run/sr-mount/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/72ad514a-f1f8-4a34-9907-9c6a3506520b.raw
 not found]
          File "/opt/xensource/sm/SRCommand.py", line 96, in run
            return self._run_locked(sr)
          File "/opt/xensource/sm/SRCommand.py", line 137, in _run_locked
            target = sr.vdi(self.vdi_uuid)
          File "/opt/xensource/sm/NFSSR", line 213, in vdi
            return NFSFileVDI(self, uuid)
          File "/opt/xensource/sm/VDI.py", line 102, in __init__
            self.load(uuid)
          File "/opt/xensource/sm/FileSR.py", line 370, in load
            opterr="%s not found" % self.path)
          File "/opt/xensource/sm/xs_errors.py", line 49, in __init__
            raise SR.SROSError(errorcode, errormessage)

[25610] 2013-07-25 09:51:46.037204      lock: closed 
/var/lock/sm/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/sr


and this is on a migrate(destination)

[29480] 2013-07-25 09:53:18.859918      lock: acquired 
/var/lock/sm/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/sr
[29480] 2013-07-25 09:53:18.897479      Raising exception [46, The VDI is not 
available 
[opterr=/var/run/sr-mount/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/72ad514a-f1f8-4a34-9907-9c6a3506520b.raw
 not found]]
[29480] 2013-07-25 09:53:18.897609      lock: released 
/var/lock/sm/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/sr
[29480] 2013-07-25 09:53:18.898701      ***** generic exception: vdi_attach: 
EXCEPTION SR.SROSError, The VDI is not available 
[opterr=/var/run/sr-mount/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/72ad514a-f1f8-4a34-9907-9c6a3506520b.raw
 not found]
          File "/opt/xensource/sm/SRCommand.py", line 96, in run
            return self._run_locked(sr)
          File "/opt/xensource/sm/SRCommand.py", line 137, in _run_locked
            target = sr.vdi(self.vdi_uuid)
          File "/opt/xensource/sm/NFSSR", line 213, in vdi
            return NFSFileVDI(self, uuid)
          File "/opt/xensource/sm/VDI.py", line 102, in __init__
            self.load(uuid)
          File "/opt/xensource/sm/FileSR.py", line 370, in load
            opterr="%s not found" % self.path)
          File "/opt/xensource/sm/xs_errors.py", line 49, in __init__
            raise SR.SROSError(errorcode, errormessage)

[29480] 2013-07-25 09:53:18.898972      lock: closed 
/var/lock/sm/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/sr

this is on migrate (source)

[16462] 2013-07-25 10:02:48.800862      blktap2.deactivate
[16462] 2013-07-25 10:02:48.800965      lock: acquired 
/var/lock/sm/72ad514a-f1f8-4a34-9907-9c6a3506520b/vdi
[16462] 2013-07-25 10:02:48.819441      ['/usr/sbin/tap-ctl', 'close', '-p', 
'5578', '-m', '7']
[16462] 2013-07-25 10:02:49.295250       = 0
[16462] 2013-07-25 10:02:49.295467      ['/usr/sbin/tap-ctl', 'detach', '-p', 
'5578', '-m', '7']
[16462] 2013-07-25 10:02:49.299579       = 0
[16462] 2013-07-25 10:02:49.299794      ['/usr/sbin/tap-ctl', 'free', '-m', '7']
[16462] 2013-07-25 10:02:49.303645       = 0
[16462] 2013-07-25 10:02:49.303902      tap.deactivate: Shut down 
Tapdisk(vhd:/var/run/sr-mount/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/72ad514a-f1f8-4a34-9907-9c6a3506520b.vhd,
 pid=5578, minor=7, state=R)
[16462] 2013-07-25 10:02:49.485672      ['/usr/sbin/td-util', 'query', 'vhd', 
'-vpf', 
'/var/run/sr-mount/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/72ad514a-f1f8-4a34-9907-9c6a3506520b.vhd']
[16462] 2013-07-25 10:02:49.510929        pread SUCCESS
[16462] 2013-07-25 10:02:49.537296      Removed host key 
host_OpaqueRef:645996e3-d9cc-59e1-3842-65d679e9e080 for 
72ad514a-f1f8-4a34-9907-9c6a3506520b
[16462] 2013-07-25 10:02:49.537451      lock: released 
/var/lock/sm/72ad514a-f1f8-4a34-9907-9c6a3506520b/vdi
[16462] 2013-07-25 10:02:49.537540      lock: closed 
/var/lock/sm/72ad514a-f1f8-4a34-9907-9c6a3506520b/vdi
[16462] 2013-07-25 10:02:49.537641      lock: closed 
/var/lock/sm/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/sr
[16462] 2013-07-25 10:02:49.537862      lock: closed 
/var/lock/sm/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/sr
[16636] 2013-07-25 10:02:50.103352      lock: acquired 
/var/lock/sm/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/sr
[16636] 2013-07-25 10:02:50.117961      ['/usr/sbin/td-util', 'query', 'vhd', 
'-vpf', 
'/var/run/sr-mount/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/72ad514a-f1f8-4a34-9907-9c6a3506520b.vhd']
[16636] 2013-07-25 10:02:50.137963        pread SUCCESS
[16636] 2013-07-25 10:02:50.139106      vdi_detach {'sr_uuid': 
'9f9aa794-86c0-9c36-a99d-1e5fdc14a206', 'subtask_of': 
'DummyRef:|ebe0d00f-b082-77ba-b209-095e71a0c1c7|VDI.detach', 'vdi_ref': 
'OpaqueRef:31009428-3c98-c005-67ed-ddcc5e432e03', 'vdi_on_boot': 'persist', 
'args': [], 'vdi_location': '72ad514a-f1f8-4a34-9907-9c6a3506520b', 'host_ref': 
'OpaqueRef:645996e3-d9cc-59e1-3842-65d679e9e080', 'session_ref': 
'OpaqueRef:f4170801-402a-0935-a759-19a46e700a87', 'device_config': {'server': 
'10.254.253.9', 'SRmaster': 'true', 'serverpath': '/xen', 'options': ''}, 
'command': 'vdi_detach', 'vdi_allow_caching': 'false', 'sr_ref': 
'OpaqueRef:fefba283-7462-1f5a-b4e2-d58169c4b318', 'vdi_uuid': 
'72ad514a-f1f8-4a34-9907-9c6a3506520b'}
[16636] 2013-07-25 10:02:50.139415      lock: closed 
/var/lock/sm/72ad514a-f1f8-4a34-9907-9c6a3506520b/vdi
[16636] 2013-07-25 10:02:50.139520      lock: released 
/var/lock/sm/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/sr
[16636] 2013-07-25 10:02:50.139779      lock: closed 
/var/lock/sm/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/sr
[17886] 2013-07-25 10:03:16.326423      sr_scan {'sr_uuid': 
'fc63fc27-89ca-dbc8-228d-27e3c74779bb', 'subtask_of': 
'DummyRef:|2f34582a-2b3b-82be-b1f6-7f374565c8e8|SR.scan', 'args': [], 
'host_ref': 'OpaqueRef:645996e3-d9cc-59e1-3842-65d679e9e080', 'session_ref': 
'OpaqueRef:bfffd224-6edc-1cb4-9145-c0c95cbb063b', 'device_config': {'iso_path': 
'/iso', 'type': 'cifs', 'SRmaster': 'true', 'location': 
'//10.254.254.30/share'}, 'command': 'sr_scan', 'sr_ref': 
'OpaqueRef:9c7f5cd0-fd88-16e2-2426-6e066a1183ab'}


----- Original Message -----
From: "Sébastien RICCIO" <sr@xxxxxxxxxxxxxxx>
To: "Andres E. Moya" <amoya@xxxxxxxxxxxxxxxxx>, "Alberto Castrillo" 
<castrillo@xxxxxxxxxx>
Cc: "xen-api" <xen-api@xxxxxxxxxxxxx>
Sent: Wednesday, July 24, 2013 10:55:40 PM
Subject: Re: [Xen-API] The vdi is not available

Hi,

When this happens, what does /var/log/SMlog say?

Can you please tail -f /var/log/SMlog on both source and destination,
try to migrate the VM and paste the results?

Cheers,
Sébastien

On 24.07.2013 23:09, Andres E. Moya wrote:
I also just tried creating a new storage repository; moving the VDI to the new
storage repository is successful, but when I then try to migrate it to server C
I still have the same issue.

Moya Solutions, Inc.
amoya@xxxxxxxxxxxxxxxxx
0 | 646-918-5238 x 102
F | 646-390-1806

----- Original Message -----
From: "Alberto Castrillo" <castrillo@xxxxxxxxxx>
To: "xen-api" <xen-api@xxxxxxxxxxxxx>
Sent: Wednesday, July 24, 2013 4:12:13 PM
Subject: Re: [Xen-API] The vdi is not available



We use NFS as shared storage, and have faced some "VDI not available" issues
with our VMs. I haven't been able to start a VM with the method from that URL in
XCP 1.6 (in 1.1 and 1.5 beta it worked). What worked for me:


- Detach the VDI from the VM
- Detach and forget the SR where the VDI is stored
- Reattach the forgotten SR (create a new SR, give it the same info as the
detached SR, re-use the SR-UUID, ...)
- Reattach the VDI to the VM
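In xe terms those steps look roughly like the sketch below. The commands are echoed, not run; the $VBD_UUID-style values are placeholders you'd fill in from your own pool, the SR uuid shown is the one from this thread as an example, and re-using the SR-UUID is done via sr-introduce:

```shell
#!/bin/sh
# Sketch only: echoed xe commands for the detach/forget/reintroduce steps.
# $VBD_UUID, $PBD_UUID, $HOST_UUID, $NEW_PBD_UUID, $VM_UUID and $VDI_UUID
# are placeholders; SR_UUID below is the example uuid from this thread.
SR_UUID=9f9aa794-86c0-9c36-a99d-1e5fdc14a206
echo xe vbd-destroy uuid='$VBD_UUID'              # detach the VDI from the VM
echo xe pbd-unplug uuid='$PBD_UUID'               # detach the SR...
echo xe sr-forget uuid="$SR_UUID"                 # ...and forget it
echo xe sr-introduce uuid="$SR_UUID" type=nfs name-label=nfs-sr content-type=user
echo xe pbd-create sr-uuid="$SR_UUID" host-uuid='$HOST_UUID' \
     device-config:server=10.254.253.9 device-config:serverpath=/xen
echo xe pbd-plug uuid='$NEW_PBD_UUID'
echo xe vbd-create vm-uuid='$VM_UUID' vdi-uuid='$VDI_UUID' device=0   # reattach
```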




On 24/07/2013, at 21:10, hook wrote:



Last weekend (as usual O_o) we experienced the issue in our XCP 1.6
production pool.
The shared iSCSI storage was shut down due to misconfigured UPS settings while
the XCP servers continued to work.


When the storage was returned to a working state and reconnected to the pool,
most VMs did not boot, with the same message - VDI is not available.
Googling gave me the method mentioned above - forget and reconnect the VDI.
The result was even worse - the whole SR became unusable.
A storage rescan produced lots of errors, like bad LVM headers and many others.


Finally I disconnected the failed SR from the pool, connected it back, and the
SR became healthy (or so it looked). But no VM with a disk on this SR would
start; they froze during startup.
I did not find a solution and restored most VMs from backup (long live VMPP!)


So, I just want to say: be very careful with VDIs on a shared storage
repository in a production environment.





2013/7/24 Brian Menges <bmenges@xxxxxxxxxx>


Have you tried the following?
http://community.spiceworks.com/how_to/show/14199-xcp-xen-cloud-platform-xenserver-the-vdi-is-not-available

- Brian Menges
Principal Engineer, DevOps
GoGrid | ServePath | ColoServe | UpStream Networks


-----Original Message-----
From: xen-api-bounces@xxxxxxxxxxxxx [mailto: xen-api-bounces@xxxxxxxxxxxxx ] On 
Behalf Of Andres E. Moya
Sent: Wednesday, July 24, 2013 09:32
To: xen-api@xxxxxxxxxxxxx
Subject: [Xen-API] The vdi is not available

Guys, I need help troubleshooting this issue.

I have an XCP 1.6 pool with 3 machines: A, B, and C.

I can migrate from A to B and B to A.

We cannot migrate from A or B to C; we also cannot shut down a VM and start it
up on C. When we do that we get the message "The vdi is not available".

We have tried removing machine C from the pool and rejoining, and still have
the issue.

When we first add host C to the pool, it cannot load the NFS storage repository,
because we need to create a management interface from a bonded VLAN that only
gets created after joining the pool. After we create the interface and run a
replug on the storage repository, it says it's connected / replugged.

Thanks for any help in advance


_______________________________________________
Xen-api mailing list
Xen-api@xxxxxxxxxxxxx
http://lists.xen.org/cgi-bin/mailman/listinfo/xen-api

________________________________

The information contained in this message, and any attachments, may contain 
confidential and legally privileged material. It is solely for the use of the 
person or entity to which it is addressed. Any review, retransmission, 
dissemination, or action taken in reliance upon this information by persons or 
entities other than the intended recipient is prohibited. If you receive this 
in error, please contact the sender and delete the material from any computer.











 

