
Re: [Xen-users] Detach specific partition LVM of XEN


  • To: "Stephan Seitz" <s.seitz@xxxxxxxxxxxx>
  • From: "Souza Anderson" <souzalix@xxxxxxxxx>
  • Date: Thu, 12 Jun 2008 16:32:42 -0600
  • Cc: xen-users <xen-users@xxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Thu, 12 Jun 2008 15:36:57 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

Hi Stephan, problem solved. I used fuser to identify the processes that were keeping files or sockets open on the LVM partition. In my VM's .cfg file I also removed the line vfb = [ 'type=vnc,vncdisplay=0,vnclisten=0.0.0.0,vncpasswd=*****' ]
Thanks so much!!!!
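
For reference, the fix in the domU config was just commenting out (or deleting) that vfb line. A minimal sketch, assuming the file is /etc/xen/vm01.cfg (the path and file name are guesses on my part):

# /etc/xen/vm01.cfg -- VNC framebuffer disabled so xend no longer starts
# a qemu-dm backend that keeps the LV open:
#vfb = [ 'type=vnc,vncdisplay=0,vnclisten=0.0.0.0,vncpasswd=*****' ]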
 
fuser -a /dev/VGxen/VM01_Debian
/dev/VGxen/VM01_Debian:  6567  6570  6571  6572  6573

srv01-Debian:/# ps -ef |grep -i 6567
root      6567     1  0 15:18 ?        00:00:00 /usr/lib/xen/bin/qemu-dm -d 5 -domain-name vm01 -vnc 0.0.0.0:0,password -serial pty -M xenpv
root      6570  6567  0 15:18 ?        00:00:00 /usr/lib/xen/bin/qemu-dm -d 5 -domain-name vm01 -vnc 0.0.0.0:0,password -serial pty -M xenpv
root      6837  5030  0 15:37 pts/1    00:00:00 grep -i 6567
srv01-Debian:/# ps -ef |grep -i 6571
root      6571  6570  0 15:18 ?        00:00:00 /usr/lib/xen/bin/qemu-dm -d 5 -domain-name vm01 -vnc 0.0.0.0:0,password -serial pty -M xenpv
root      6839  5030  0 15:37 pts/1    00:00:00 grep -i 6571
srv01-Debian:/# ps -ef |grep -i 6572
root      6572  6570  0 15:18 ?        00:00:00 /usr/lib/xen/bin/qemu-dm -d 5 -domain-name vm01 -vnc 0.0.0.0:0,password -serial pty -M xenpv
root      6841  5030  0 15:37 pts/1    00:00:00 grep -i 6572
srv01-Debian:/# ps -ef |grep -i 6573
root      6573  6570  0 15:18 ?        00:00:00 /usr/lib/xen/bin/qemu-dm -d 5 -domain-name vm01 -vnc 0.0.0.0:0,password -serial pty -M xenpv
root      6843  5030  0 15:37 pts/1    00:00:00 grep -i 6573
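
The transcript below goes straight to the lvchange that now succeeds, so the leftover qemu-dm processes listed above presumably had to be killed first. A minimal sketch of that step, assuming the PIDs reported by fuser are the only users of the volume:

# kill 6567 6570 6571 6572 6573

or let fuser take care of it directly:

# fuser -k /dev/VGxen/VM01_Debian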
 
srv01-Debian:~# lvchange -an /dev/VGxen/VM01_Debian
srv01-Debian:~# lvdisplay
  --- Logical volume ---
  LV Name                /dev/VGxen/VM01_Debian
  VG Name                VGxen
  LV UUID                vsdRV1-j7cA-LlZI-dAvv-yPC1-lCEt-OQY3W3
  LV Write Access        read/write
  LV Status              NOT available
  LV Size                17.69 GB
  Current LE             4529
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
srv01-Debian:~# lvremove /dev/VGxen/VM01_Debian
  Logical volume "VM01_Debian" successfully removed
srv01-Debian:~# vgdisplay
  --- Volume group ---
  VG Name               VGxen
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  5
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               17.69 GB
  PE Size               4.00 MB
  Total PE              4529
  Alloc PE / Size       0 / 0
  Free  PE / Size       4529 / 17.69 GB
  VG UUID               cHQRPL-cJEN-UkSl-kQFT-rdZ7-BXVf-nbF8wR
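
With the volume group freed up again, a new LV for the next guest can be created in the usual way. A sketch only; the new LV name is made up, and -l 4529 just reuses all the free extents shown above:

# lvcreate -l 4529 -n VM02_Debian VGxen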


 
2008/6/12 Souza Anderson <souzalix@xxxxxxxxx>:
Hi Stephan, I tried that but didn't have any success. I also tried some other specific troubleshooting, still with no luck. I'm sending the versions of my lvm and udev packages, both on Debian etch.
 
 
srv01-Debian:/# xm  block-list vm01 --long
(51712
    ((virtual-device 51712)
        (device-type disk)
        (protocol x86_32-abi)
        (backend-id 0)
        (state 4)
        (backend /local/domain/0/backend/vbd/3/51712)
        (ring-ref 8)
        (event-channel 6)
    )
)

srv01-Debian:/# kpartx -l /dev/VGxen/VM01_Debian
VM01_Debian1 : 0 33203680 /dev/VGxen/VM01_Debian 32
VM01_Debian2 : 0 3897856 /dev/VGxen/VM01_Debian 33203712
srv01-Debian:/# kpartx -d /dev/VGxen/VM01_Debian
srv01-Debian:/# kpartx -l /dev/VGxen/VM01_Debian
srv01-Debian:/#

srv01-Debian:/# lvchange -an /dev/VGxen/VM01_Debian
  LV VGxen/VM01_Debian in use: not deactivating
xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0   238     1     r-----     71.9
vm01                                         4   256     1     -b----      3.9
xm block-list vm01 --long
(51712
    ((virtual-device 51712)
        (device-type disk)
        (protocol x86_32-abi)
        (backend-id 0)
        (state 4)
        (backend /local/domain/0/backend/vbd/5/51712)
        (ring-ref 8)
        (event-channel 6)
    )
)
srv01-Debian:/# xm block-detach vm01 51712
srv01-Debian:/# xm block-list vm01 --long
srv01-Debian:/# kpartx -l /dev/VGxen/VM01_Debian

srv01-Debian:/# lvchange -an /dev/VGxen/VM01_Debian
  LV VGxen/VM01_Debian in use: not deactivating
srv01-Debian:/# dpkg -l |grep -i udev
ii  udev                              0.105-4                                  /dev/ and hotplug management daemon
srv01-Debian:/# dpkg -l |grep -i lvm
ii  lvm-common                        1.5.20                                   The Logical Volume Manager for Linux (common
ii  lvm2                              2.02.06-4etch1                           The Linux Logical Volume Manager
srv01-Debian:/# xm destroy vm01
srv01-Debian:/# lvchange -an /dev/VGxen/VM01_Debian

  LV VGxen/VM01_Debian in use: not deactivating
 
Thanks

2008/6/12 Stephan Seitz <s.seitz@xxxxxxxxxxxx>:

Hi,

I assume your LVM problem is caused by an overeager udev configuration
which sets up device entries for partitions _inside_ the LV as well as for
the LV itself.

Try
# kpartx -d /dev/yourvg/yourlv
to release these mapper entries before you lvremove the LV.


If you renamed this LV, you need to check and re-assign the /dev/mapper entries
manually before you're able to remove them.
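
One way to inspect and drop stale mapper entries by hand (a sketch; the exact names under /dev/mapper are assumptions based on the kpartx -l output earlier in the thread):

# dmsetup ls --tree
# dmsetup remove VGxen-VM01_Debian1
# dmsetup remove VGxen-VM01_Debian2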

When you're done, check for updated udev and/or lvm packages for your distro.
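
On Debian etch that check could look something like:

# apt-get update
# apt-cache policy udev lvm2 lvm-common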

cheers,

Stephan




Souza Anderson wrote:
Hi...
 I'm having a problem detaching a specific LVM partition used by Xen. I have tried xm destroy <domain>, lvchange -an <lvm_partition>, and lvremove -f, all without success. I even restarted the server into runlevel 1 (init 1) and still nothing. I see two kernel threads running, xenwatch and xenbus, but I am not sure whether they hold anything open on the LVM partitions used by Xen. I need to know how I can remove this partition so I can build another Xen VM, and whether I need to stop the xend and xendomains services to do the detachment.
 Thanks so much!!!!
 --- Logical volume ---
 LV Name                /dev/VGxen/VM01_Debian
 VG Name                VGxen
 LV UUID                vsdRV1-j7cA-LlZI-dAvv-yPC1-lCEt-OQY3W3
 LV Write Access        read/write
 LV Status              available
 # open                 4
 LV Size                17.69 GB
 Current LE             4529
 Segments               1
 Allocation             inherit
 Read ahead sectors     0
 Block device           253:0
srv01-Debian:~# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0   238     1     r-----     29.2
vm01                                         1   256     1     -b----      0.1
srv01-Debian:~# xm destroy vm01
srv01-Debian:~# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0   238     1     r-----     31.5
srv01-Debian:~# lvdisplay
 --- Logical volume ---
 LV Name                /dev/VGxen/VM01_Debian
 VG Name                VGxen
 LV UUID                vsdRV1-j7cA-LlZI-dAvv-yPC1-lCEt-OQY3W3
 LV Write Access        read/write
 LV Status              available
 # open                 2
 LV Size                17.69 GB
 Current LE             4529
 Segments               1
 Allocation             inherit
 Read ahead sectors     0
 Block device           253:0
srv01-Debian:~# lvchange -an /dev/VGxen/VM01_Debian
 LV VGxen/VM01_Debian in use: not deactivating
srv01-Debian:~#
srv01-Debian:~# /etc/init.d/xendomains stop
Shutting down Xen domains:  [done]
srv01-Debian:~# lvchange -an /dev/VGxen/VM01_Debian
 LV VGxen/VM01_Debian in use: not deactivating
srv01-Debian:~#
srv01-Debian:~# /etc/init.d/xend stop
srv01-Debian:~# lvchange -an /dev/VGxen/VM01_Debian
 LV VGxen/VM01_Debian in use: not deactivating
srv01-Debian:~# ps -ef |grep -auxf xen
grep: xen: No such file or directory
srv01-Debian:~# ps -ef |grep xen
root         9     7  0 14:12 ?        00:00:00 [xenwatch]
root        10     7  0 14:12 ?        00:00:00 [xenbus]
root      4698     1  0 14:13 ?        00:00:00 xenstored --pid-file /var/run/xenstore.pid
root      4705     1  0 14:13 ?        00:00:00 xenconsoled
root      4706  4705  0 14:13 ?        00:00:00 xenconsoled
root      4707  4706  0 14:13 ?        00:00:00 xenconsoled
root      5331  5030  0 14:19 pts/1    00:00:00 grep xen
 srv01-Debian:~# lvremove -f VGxen
 Can't remove open logical volume "VM01_Debian"
srv01-Debian:~# lvremove -f /dev/VGxen/VM01_Debian
 Can't remove open logical volume "VM01_Debian"
srv01-Debian:~# lvs
 LV          VG    Attr   LSize  Origin Snap%  Move Log Copy%
 VM01_Debian VGxen -wi-ao 17.69G
srv01-Debian:~# vgs
 VG    #PV #LV #SN Attr   VSize  VFree
 VGxen   1   1   0 wz--n- 17.69G    0
srv01-Debian:~# pvs
 PV         VG    Fmt  Attr PSize  PFree
 /dev/hda5  VGxen lvm2 a-   17.69G    0
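
Given the repeated "in use: not deactivating" above and the open count of 2 reported by lvdisplay even after xm destroy, it can help to ask device-mapper directly what is stacked on top of the LV and how many openers it has. A sketch, assuming LVM's usual vg-lv naming under /dev/mapper:

# dmsetup info -c VGxen-VM01_Debian
# dmsetup ls --tree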




--
Stephan Seitz
Senior System Administrator

*netz-haut* e.K.
multimediale kommunikation

zweierweg 22
97074 würzburg

fon: +49 931 2876247
fax: +49 931 2876248

web: www.netz-haut.de <http://www.netz-haut.de/>

registriergericht: amtsgericht würzburg, hra 5054


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users

 

