
Re: [Xen-users] Re; Dom0 Kernels ..



Hi,

My understanding is that "file:" is deprecated as it runs over a loopback 
device which does not flush its cache until unmounted, which can cause 
massive data loss if the host crashes.
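
For anyone poking at their own setup, the underside is visible from Dom0 - a 
quick sketch, nothing clever:

losetup -a    # list the loop devices Xen has set up for file: disks
sync          # push dirty pages out by hand - a stopgap, not a fix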

Not that it makes any difference to the gluster problem, but I actually need 
ioemu because I'm running software RAID inside the DomU and it needs the 
partition types to auto-assemble /dev/md0 and mount the root filesystem off it.

If my understanding is correct, I'm using PV, as HVM is used for Windows (?)

My other problem is also probably a "file:" issue.

If I run on file: (which does work) and then kill off a gluster server, the 
DomU correctly fails a drive.

Ideally I would then use xm block-detach and xm block-attach to "unfail" the 
device, and "mdadm --re-add /dev/md0 /dev/xvda1" to get the RAID re-syncing. 
However, xm block-detach fails to work properly .. eventually it will kill off 
the loopback device, but it won't unregister the "xvda" device from the kernel 
.. so you can re-attach the filesystem as, say, xvdc, but this is very messy. 
After enough reboots one would run out of devices ...

(!)
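
For the record, the recovery sequence I'm attempting looks roughly like this 
(paths as per my config further down; exact xm argument forms may vary by Xen 
version):

xm block-detach test2 xvda          # Dom0: drop the failed backend
xm block-attach test2 tap:aio:/vols/images/domains/test2/disk.img xvda w
mdadm --re-add /dev/md0 /dev/xvda1  # DomU: put the member back, re-sync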

AIO is doing *something* gluster can't handle .. if I could find out what it is 
I'd stand a chance of fixing it.

Unfortunately I don't know enough about FUSE or AIO to even guess, beyond 
confirming that, as far as I can tell, it's not an "async" or "direct I/O" issue.
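
If anyone fancies digging, tracing tapdisk's native AIO calls against the 
gluster mount might show where it sticks - a sketch, assuming tapdisk is the 
process doing the AIO:

strace -f -e trace=io_setup,io_submit,io_getevents,open \
    -p $(pgrep -o tapdisk) -o tapdisk-aio.trace   # pgrep -o: oldest match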

----- Original Message -----
From: "Tait Clarridge" <Tait.Clarridge@xxxxxxxxxxxx>
To: "Gareth Bult" <gareth@xxxxxxxxxxxxx>
Cc: "Xen-Users" <xen-users@xxxxxxxxxxxxxxxxxxx>, "gluster-devel Glister Devel 
List" <gluster-devel@xxxxxxxxxx>
Sent: 31 January 2008 18:34:09 (GMT) Europe/London
Subject: RE: [Xen-users] Re; Dom0 Kernels ..

Hi Gareth,

To my knowledge the tap:aio stuff is deprecated now? I thought I read that 
somewhere on the mailing list, but I could be wrong.

I am not very savvy with the whole Gluster thing; I usually stick to the 
standard 'file:/path/to/image.img,ioemu:hda,w' when creating HVM VMs and drop 
the ioemu stuff when creating PV Linux domains.
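
That is, something like this (illustrative paths):

disk = [ 'file:/path/to/image.img,ioemu:hda,w' ]   # HVM
disk = [ 'file:/path/to/image.img,xvda,w' ]        # PV, ioemu dropped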

All of my PV domains use EXT3. Are you running the domain PV or HVM?

-Tait


From: Gareth Bult [mailto:gareth@xxxxxxxxxxxxx] 
Sent: January-31-08 1:23 PM
To: Tait Clarridge
Cc: Xen-Users; Gluster Devel List
Subject: Re: [Xen-users] Re; Dom0 Kernels ..

Ok,

I now have 2.6.21 up and running; it seems to be OK in itself.

When I start my DomU, if I use AIO on local volumes it's fine .. if I run it 
on Gluster, it still hangs.
(not only does it hang the DomU, it also hangs xenwatchd, so no more DomUs 
will start - it needs a reboot to fix)

Here's what I see when I boot:

NET: Registered protocol family 1
NET: Registered protocol family 17
xen-vbd: registered block device major 202
 xvda: xvda1
 xvdb:<4>XENBUS: Timeout connecting to device: device/vif/0 (state 1)
XENBUS: Device with no driver: device/console/0
Freeing unused kernel memory: 192k freed
Loading, please wait...

My configuration file looks like this:

kernel      = '/boot/vmlinuz-2.6.21-prep'
ramdisk     = '/boot/initrd.img-2.6.21-prep'
memory      = '256'
root        = '/dev/md0 ro'
disk        = [ 'tap:aio:/vols/images/domains/test2/disk.img,ioemu:xvda,w' , 
'tap:aio:/cluster/images1/domains/test2/disk2.img,ioemu:xvdb,w' ]
name        = 'test2'
vif         = [ 'ip=10.0.0.23,mac=00:00:10:00:00:23' ]
on_poweroff = 'destroy'
on_reboot   = 'restart'
on_crash    = 'restart'
console     = 'xvc0'

I get exactly the same results irrespective of whether the glusterfs is mounted 
with direct I/O enabled or disabled.
file: based access (rather than tap:aio) works just fine.
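
By "file: based" I mean the same two disks with the tap:aio: prefix swapped 
for file:, i.e.:

disk        = [ 'file:/vols/images/domains/test2/disk.img,ioemu:xvda,w' , 
'file:/cluster/images1/domains/test2/disk2.img,ioemu:xvdb,w' ]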

Any ideas?


----- Original Message -----
From: "Tait Clarridge" <Tait.Clarridge@xxxxxxxxxxxx>
To: "Gareth Bult" <gareth@xxxxxxxxxxxxx>, "Xen-Users" 
<xen-users@xxxxxxxxxxxxxxxxxxx>
Sent: 31 January 2008 14:32:38 (GMT) Europe/London
Subject: RE: [Xen-users] Re; Dom0 Kernels ..


Hello,
 
I am running a 2.6.21 kernel with Xen in dom0 and it has been rock solid. What 
I did was build Xen 3.2 with the default kernel (to get all the tools etc.), 
then download the Mercurial source of a 2.6.21 kernel.
 
They can be found at http://hg.et.redhat.com/kernel-dev/ - I used the 
ehabkost/linux-2.6.21-xen-3.1.0 kernel and it has worked really well.
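 
If the repository paths join the way they appear to (an assumption - check the 
index page), cloning it is just:

hg clone http://hg.et.redhat.com/kernel-dev/ehabkost/linux-2.6.21-xen-3.1.0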
 
When you are configuring the kernel, ensure that you enable Xen support (it is 
one of the options under the Processor Type and Features heading), then scroll 
down to the bottom of the config subsections on the first page and a new XEN 
heading will be available. Enable Hypervisor Support there (or something 
similarly worded - I don't remember the exact wording).
 
I had to mess around with a few options before it started working right: my 
kernel has all the RAID and LVM stuff built in (no modules), and I made sure 
that most of the SCSI, USB controller and AHCI support is built as modules.
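 
In .config terms that's roughly the following (symbol names from memory - 
check them against your tree):

CONFIG_XEN=y            # Xen support
CONFIG_BLK_DEV_MD=y     # software RAID built in, no modules
CONFIG_MD_RAID1=y
CONFIG_BLK_DEV_DM=y     # LVM (device-mapper) built in
CONFIG_SCSI=m           # SCSI as modules
CONFIG_USB=m            # USB controllers as modules
CONFIG_SATA_AHCI=m      # AHCI as a module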
 
Let me know how it goes; you might need to throw a few extra options at 
ramdisk creation, so if you run into problems... post 'em :)
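 
On Debian/Ubuntu that would be along the lines of (the version string here is 
just an example; extra modules can be listed in /etc/initramfs-tools/modules):

mkinitramfs -o /boot/initrd.img-2.6.21-prep 2.6.21-prep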
 
Best of Luck,
Tait
From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx 
[mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Gareth Bult
Sent: January-31-08 5:28 AM
To: Xen-Users
Subject: [Xen-users] Re; Dom0 Kernels ..
 
Hi,

I need some features from 2.6.19+ in my Dom0 .. is this an impossible task for 
the moment, or does anyone have a stable kernel source tree more advanced than 
2.6.18.8 that I can access?

Ubuntu's 2.6.22 is unstable .. a pure source build of 2.6.18.8 is great .. 
(with Xen 3.2)

What I'd like is to source-build a 2.6.xx where xx > 18 ... can anyone point me 
in the right direction?

tia
Gareth.


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

