[Xen-devel] virtual disk/block-device problem



I am trying to write a script to handle setup of domains using virtual
block devices for their root FS and have been unable to get the virtual
devices to work with any sort of consistency.  This is using xen-1.1.bk
on Red Hat 7.3 (I had to rebuild all the tool binaries to run on 7.3).

Anyway, I would first like to make sure I am using the vd/vbd stuff correctly.
Here is an example of the problem I see.  I have a partition to back the vds:

        xenctl partitions add -phdb2 -f

hdb2 is of course the second partition on the second disk.  I am assuming
that the partition doesn't need to have a filesystem on it, i.e., it just
uses the raw blocks for storage?  I create three vds:

        xenctl vd create -nD1 -s128M
        xenctl vd create -nD2 -s128M
        xenctl vd create -nD3 -s128M

Then I use the returned keys to create virtual block devices:

        xenctl vbd create -n0 -w -v0 -k3827077824
        xenctl vbd create -n0 -w -v1 -k8055618233
        xenctl vbd create -n0 -w -v2 -k6945314927

Write access is given to domain0 (-n0) so I can initialize them.  My assumption
here is that the <vbd_num> given to the -v option translates into /dev/xvda
for -v0, /dev/xvdb for -v1, etc.  Is that correct?
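Since the end goal is a script, here is roughly how I am gluing those two
steps together.  This assumes the key is the last token that "xenctl vd create"
prints; if the output format is different the parsing will need adjusting:

        # Guess: the new vd's key is the last token printed by "xenctl vd create"
        KEY=$(xenctl vd create -nD1 -s128M | awk '{key=$NF} END {print key}')
        xenctl vbd create -n0 -w -v0 -k${KEY}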

BTW, do I need to create a distinct virtual block device that grants access to
the domain whose kernel is going to use the virtual disk as its root?
Currently I do not; I just set root=/dev/xvdN in xi_build, where xvdN is the
device I create/use here for dom0 initialization.  Do I need to do a
"xenctl physical grant" for either the virtual block device or the
partition on which the virtual disks reside?
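For what it's worth, if a separate per-domain vbd does turn out to be needed,
my guess is that it would just be the same vd key bound to the new domain's
number, something like this (untested):

        # Untested guess: give domain 1 its own vbd backed by the same vd key,
        # before booting that domain with root=/dev/xvda.
        xenctl vbd create -n1 -w -v0 -k3827077824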

Moving on:

        xen_refresh_dev /dev/xvda
        xen_refresh_dev /dev/xvdb
        xen_refresh_dev /dev/xvdc

I read in the mailing list archive that this refresh is needed...
Now I run "fdisk -l" on each of them and get:

        Disk /dev/xvda: 255 heads, 63 sectors, 16 cylinders
        Disk /dev/xvdb: 255 heads, 63 sectors, 32 cylinders
        Disk /dev/xvdc: 255 heads, 63 sectors, 48 cylinders
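For reference, here is what a single 128M vd should report with that
head/sector geometry (my arithmetic, not anything Xen prints):

        # One cylinder = heads * sectors/track * 512 bytes = 255 * 63 * 512
        echo $(( 128 * 1024 * 1024 / (255 * 63 * 512) ))    # prints 16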

Note the ever-increasing number of cylinders.  This makes mkfs think that
xvdb and xvdc are larger than they really are.  mkfs does succeed, but you
get a lot of:

        DOM0: Bad return from blkdev data request: 1

on the console.  And if you try to fsck it or actually use it, you get
filesystem errors.  If I explicitly tell mkfs how big the partition is,
everything works fine.  So someone is just reporting the geometry wrong,
probably forgetting to subtract off a non-zero starting location.
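
Concretely, the workaround is telling mkfs the real size; for a 128M vd with
1024-byte blocks that is 131072 blocks (mke2fs takes the block count as its
last argument), so something like:

        # Work around the bogus geometry by giving mke2fs the real size:
        # 128M / 1024-byte blocks = 131072 blocks.
        mke2fs -b 1024 /dev/xvdb 131072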

