
Re: [Xen-devel] [BUG] domU hangs when mkfs over a cryptsetup mapped from a file lying on rootfs



Hello Konrad Rzeszutek Wilk,

In this example the filesystem for the domU is a non-encrypted plain
file on the dom0, stored on a physical SATA hard disk that is part of
a Linux software RAID1 on the host. I also tried a non-encrypted LVM
volume instead of plain files, with the same result.
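
For reference, the backing storage on the dom0 can be inspected like
this (a sketch; the disk.img path follows the usual xen-tools layout
under --dir and may differ on other setups):

cat /proc/mdstat                             # software RAID1 status
ls -lh /media/xen/domains/testtest/disk.img  # plain-file guest image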

I created the crypto container inside the domU, not on the host. On
the host this works without problems.

The host runs the same kernel and the same Linux distribution,
installed in the same manner out of the box.

Sincerely,
tVos

On Thu, August 15, 2013 15:35, Konrad Rzeszutek Wilk wrote:
> On Wed, Aug 14, 2013 at 06:57:31PM +0200, tVos wrote:
>> Hello xen-developers,
>>
>> I think I found a bug, so below I report everything I did, what
>> happened, and what I expected. Please tell me whether this bug can
>> be confirmed.
>>
>> -----BEGIN BUG REPORT-----
>> Distro: Debian 7.1 Wheezy
>> XEN: xen-hypervisor-4.1-amd64          4.1.4-3+deb7u1
>> Kernel: 3.2.0-4-amd64 #1 SMP Debian 3.2.46-1 x86_64 GNU/Linux
>>
>>
>> First I created a new domU:
>> xen-create-image --hostname=testtest --ip=10.3.4.1 --arch=amd64
>> --dist=wheezy --size=30Gb --memory=256Mb --vcpus=1 --verbose --noswap
>> --dir=/media/xen
>>
>> I started it and got into its console with:
>> xm create testtest.cfg && xm console testtest
>>
>> I logged in and installed cryptsetup and gddrescue:
>> apt-get update && apt-get install cryptsetup gddrescue
>>
>> I created a 10GB file from /dev/zero:
>> ddrescue -b 4096 -s 10G /dev/zero ./testvolume
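>>
>> As a quick sanity check (a sketch; ddrescue writes real zeros rather
>> than a sparse file, so apparent and allocated size should both be
>> about 10G):
>> ls -lh ./testvolume   # apparent size
>> du -h ./testvolume    # allocated size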
>>
>> Then I ran luksFormat on it and mapped it:
>> cryptsetup luksFormat ./testvolume
>> cryptsetup luksOpen ./testvolume testvolume
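>>
>> The mapping can be verified at this point (a sketch; cryptsetup sets
>> up a loop device behind the scenes for file-backed containers):
>> cryptsetup status testvolume   # should report the mapping as active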
>>
>> *** Up to this point everything was normal and as expected. ***
>>
>> Now I tried to mkfs the mapped encrypted volume:
>> mkfs -t ext3 /dev/mapper/testvolume
>>
>> mkfs.ext3 got as far as "Writing inode tables: 22/75" and got stuck
>> there. The domU hangs in a way that it no longer reacts at all: no
>> daemon responds, getty no longer responds, and sometimes dmesg in the
>> domU reports that a process hung for more than 120 seconds.
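>>
>> While the guest is hung, stack traces of the blocked tasks could be
>> captured from the dom0 (a sketch; assumes sysrq is enabled in the
>> guest kernel):
>> xm sysrq testtest w    # ask the guest to dump blocked task stacks
>> xm console testtest    # the traces appear on the guest console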
>>
>> xentop and xm list show the domU in blocked state; the other domUs
>> continue working normally, and the host also works normally.
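>>
>> For reference, the 'b' flag in the State column means blocked,
>> i.e. waiting on I/O or an event:
>> xm list testtest   # State shows e.g. '-b----' while it is stuck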
>>
>> There are no messages concerning this in the host dmesg nor in the
>> Xen dmesg. Nothing in the log files.
>>
>> The only way to recover the domU is to wait a _very_ long time,
>> usually 12-48 hours, after which it suddenly continues and finishes
>> the mkfs.ext3; or to xm destroy the domU and create it again.
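>>
>> That is, using the config name from above:
>> xm destroy testtest && xm create testtest.cfg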
>>
>> I tested this on my Xen setup as well as on other similar Xen
>> setups, and with different parameters such as the guest rootfs on a
>> file or on a volume group, more or less RAM, more or less disk, more
>> or fewer CPUs, and with and without swap.
>>
>> All setups are standard Debian setups out of the box.
>> -----END BUG REPORT-----
>>
>> Please tell me what further information you need; I will try to
>> provide it as soon as possible. Also please tell me if I should try
>> or reconfigure something and run this scenario again.
>
> This looks like another issue that was reported in the past, where
> the guest image was created on top of an encrypted LVM. When they
> tried to scp a large file to it, the guest would hang. They were
> using 2.6.32 as the guest kernel and 3.5 as the dom0 kernel.
>
> But you are doing it a bit differently (I think). Could you explain
> to me what the underlying storage for the guest is? I see
> --dir=/media/xen.
>
> Is that over NFS? Or is that a local file?
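> (For example, df -T /media/xen on the dom0 would show the filesystem
> type of that directory.)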
>>
>> Assistance will be appreciated. Many thanks in advance.
>>
>> Sincerely,
>> tVos
>>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

