
Re: [Xen-users] harddrive setup on new install xen 3.2.1 cent 5.2


  • To: "Terry Moore" <planohog@xxxxxxxxx>
  • From: "Todd Deshane" <deshantm@xxxxxxxxx>
  • Date: Tue, 5 Aug 2008 23:37:29 -0400
  • Cc: xen-users@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Tue, 05 Aug 2008 20:38:09 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

On Tue, Aug 5, 2008 at 11:07 PM, Terry Moore <planohog@xxxxxxxxx> wrote:
> Greetings. I moved from xen 3.0.1 (with license) because the old
> version will not see more than 15G of the 32G in the new hardware.
> The old xen runs great and has a GUI too.
> OK, I have xen 3.2 running with a dom0, so it is time to load up a
> domU (with hopes of reusing the images I have in production now).
> I cannot seem to get past the disk issues.
> (I loaded CentOS 5.2 from CD and let the installer pick the partition
> layout for me. I know, bad idea.)
>
> This is what I have now:
> ####################
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/VolGroup00-LogVol00
>                      30472188   6057072  23156828  21% /
> /dev/sda1               101086     24323     71544  26% /boot
> tmpfs                 16482120         0  16482120   0% /dev/shm
> #####################
>
>
> [root@xen3 tmoore]# /usr/sbin/lvdisplay
>  --- Logical volume ---
>  LV Name                /dev/VolGroup00/LogVol00
>  VG Name                VolGroup00
>  LV UUID                sg4Ma7-uFiu-hIeE-pgaN-whmT-mPw9-q4gZEO
>  LV Write Access        read/write
>  LV Status              available
>  # open                 1
>  LV Size                276.81 GB
>  Current LE             8858
>  Segments               1
>  Allocation             inherit
>  Read ahead sectors     auto
>  - currently set to     256
>  Block device           253:0
>
>  --- Logical volume ---
>  LV Name                /dev/VolGroup00/LogVol01
>  VG Name                VolGroup00
>  LV UUID                rem4Lw-pysL-Ztgv-2HjR-oYjH-XYUN-nXxwih
>  LV Write Access        read/write
>  LV Status              available
>  # open                 1
>  LV Size                1.94 GB
>  Current LE             62
>  Segments               1
>  Allocation             inherit
>  Read ahead sectors     auto
>  - currently set to     256
>  Block device           253:1
>
> [root@xen3 tmoore]# /usr/sbin/pvdisplay
>  --- Physical volume ---
>  PV Name               /dev/sda2
>  VG Name               VolGroup00
>  PV Size               278.77 GB / not usable 19.68 MB
>  Allocatable           yes (but full)
>  PE Size (KByte)       32768
>  Total PE              8920
>  Free PE               0
>  Allocated PE          8920
>  PV UUID               0tzuXL-98LX-VMlt-fjOd-YfRz-qeoJ-q3Ox6U
> ##########################################################
>
>
> What I had envisioned was a large file system that I could build domUs
> into, growing until I ran out of room (same idea as in xen 3.0.2).
>

/dev/VolGroup00/LogVol00 essentially holds your root file system.
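Note that your pvdisplay output shows the volume group is fully allocated (Free PE 0), so there is no room left in LVM itself for new volumes. If you want the same "one big file system to grow into" approach as before, one option is sparse file-backed guest images under the root file system on LogVol00. A rough sketch (the path and size are purely illustrative, not taken from your setup):

```shell
# Illustrative sketch: create a sparse 4 GB disk image for a domU.
# In practice the image would live somewhere like
# /var/lib/xen/images/domu1.img on the LogVol00 root filesystem.
IMG=domu1.img

# count=0 with seek=4096 sets the file length to 4096 MiB without
# writing any data blocks; the file only consumes real disk space
# as the guest writes to it.
dd if=/dev/zero of="$IMG" bs=1M count=0 seek=4096

ls -lh "$IMG"
```

The alternative is LVM-backed guests (one logical volume per domU), but with the VG full you would first have to free extents, e.g. by shrinking LogVol00 (shrink the filesystem before the LV) or adding another physical volume.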

Can you post your guest config and give more detail about the domU
setup that you have?
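For comparison, a minimal Xen 3.2 PV guest config in xm format (which is parsed as Python) looks roughly like the sketch below. All names, paths, and sizes here are examples only, not from your system:

```python
# Illustrative minimal xm guest config for a CentOS 5 PV domU.
name    = "centos-domu"
memory  = 512
kernel  = "/boot/vmlinuz-2.6.18-92.el5xen"
ramdisk = "/boot/initrd-domu.img"

# A file-backed disk; an LVM-backed disk would instead look like
# 'phy:/dev/VolGroup00/<lvname>,xvda,w'.
disk    = [ "tap:aio:/var/lib/xen/images/domu1.img,xvda,w" ]
vif     = [ "bridge=xenbr0" ]
root    = "/dev/xvda ro"
```

Instead of hard-coding kernel/ramdisk, many people use `bootloader = "/usr/bin/pygrub"` so the guest boots its own installed kernel.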

Is it an LVM question, or a question about something that works
differently between the versions?

What is your plan for copying the guest images etc. from 3.0.1 to 3.2?

With some more details we should be able to help you more.

Cheers,
Todd

-- 
Todd Deshane
http://todddeshane.net
check out our book: http://runningxen.com

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

