Re: [Xen-devel] kpartx for raisin hvm tests
On Wed, 1 Mar 2017, Gémes Géza wrote:
> On 2017-02-27 23:52, Stefano Stabellini wrote:
> > On Wed, 22 Feb 2017, Géza Gémes wrote:
> > > On 2017-02-21 23:10, Stefano Stabellini wrote:
> > > > On Tue, 21 Feb 2017, Géza Gémes wrote:
> > > > > Hi,
> > > > >
> > > > > I've tried to run the raisin test suite; while the pv tests pass,
> > > > > the hvm tests fail. I've identified a number of problems, ranging
> > > > > from a too-small disk size to formatting the whole disk and then
> > > > > being unable to install grub to the boot sector. I've traced these
> > > > > problems down to the incorrect invocation of the
> > > > > _create_loop_device function in scripts/lopartsetup.
> > > > >
> > > > > My question: would it be acceptable to replace this part of the
> > > > > code with a kpartx call? Or is introducing kpartx too big a change
> > > > > to the list of dependencies?
> > > > I understand that kpartx makes things much easier, but before
> > > > introducing it as a dependency, I would like to understand this
> > > > problem a bit better.
> > > >
> > > > Why is _create_loop_device invoked incorrectly? Is it the index or
> > > > the offset that is calculated incorrectly?
> > > Hi Stefano,
> > >
> > > In scripts/lopartsetup:56:
> > >
> > >   unit="`fdisk -lu $filename 2>/dev/null | grep -e "^Units = " | cut -d " " -f 9`"
> > >
> > > On Ubuntu 16.04 (fdisk coming from util-linux 2.27.1-6ubuntu3.2) this
> > > yields an empty variable, as:
> > >
> > >   $ sudo fdisk -lu /tmp/tmp.x9UN6uxaG2/busybox-vm-disk 2>/dev/null
> > >
> > >   Disk /tmp/tmp.x9UN6uxaG2/busybox-vm-disk: 60 MiB, 62914560 bytes, 122880 sectors
> > >   Units: sectors of 1 * 512 = 512 bytes
> > >   Sector size (logical/physical): 512 bytes / 512 bytes
> > >   I/O size (minimum/optimal): 512 bytes / 512 bytes
> > >
> > > Because of this, both unit and offset are wrong (offset=`echo $i | tr -s " " | cut -d " " -f 2`,
> > > where i comes from fdisk -lu $filename 2>/dev/null | grep -e "^$filename").
> > >
> > > As I think different versions of fdisk will produce different results,
> > > we should either introduce additional logic for the fdisk version or
> > > change this part completely.
> > It doesn't look like fdisk changed output in this case. It looks like
> > the disk doesn't have any partitions in it. Am I right?
> >
> > It would be easy to add support to lopartsetup to detect disks without
> > partitions, and deal with them correctly, without bringing in kpartx.
> > However, this scenario shouldn't occur, because lopartsetup is only
> > called by create_one_partition, right after creating a partition on the
> > disk.
> >
> > Do you know why create_one_partition doesn't work as expected?
> Hi Stefano,
>
> Sorry for the late answer. The only change I made on Ubuntu 16.04 was to
> increase the hvm disk size to 60 MB.
>
> In the meanwhile I set up an Ubuntu 14.04 test system and, for the first
> time, tried to run the tests as an ordinary user instead of root. I
> found a set of problems and made a patch
> (https://github.com/geza-gemes/raisin/commit/8a1227d96697a4d8be9130fd9b16404decbe7605)
> for those.

That's a good patch, thank you. Could you please submit it to xen-devel?
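For reference, the `Units` parsing problem described above could be made tolerant of both util-linux formats along these lines; the sketch below parses a canned copy of the 2.27 output instead of invoking fdisk, and the sample text, paths, and variable names are illustrative, not raisin code. It relies on the next-to-last whitespace-separated field of the `Units` line being the byte count in both the old `Units = sectors of 1 * 512 = 512 bytes` and the new `Units: sectors of 1 * 512 = 512 bytes` layouts:

```shell
#!/bin/sh
# Illustrative sample of util-linux 2.27 `fdisk -lu` output for an image
# that does contain one partition; older fdisk differs only in printing
# "Units = sectors of 1 * 512 = 512 bytes" for the second line.
sample='Disk /tmp/disk.img: 60 MiB, 62914560 bytes, 122880 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes

Device         Boot Start    End Sectors Size Id Type
/tmp/disk.img1       2048 122879  120832  59M 83 Linux'

filename=/tmp/disk.img   # hypothetical image path

# Unit size in bytes: the next-to-last field of the Units line is the
# byte count in both the old "Units = ..." and new "Units: ..." formats,
# unlike the fixed "cut -d ' ' -f 9" in lopartsetup.
unit=$(printf '%s\n' "$sample" | awk '/^Units/ {print $(NF-1)}')

# Start sector of the first partition: field 2 of the "$filename<N>"
# line (assumes the Boot flag column is empty, as in the sample).
start=$(printf '%s\n' "$sample" | awk -v dev="$filename" '$1 ~ dev"[0-9]+$" {print $2}')

# Byte offset to hand to losetup -o.
offset=$((unit * start))
echo "unit=$unit start=$start offset=$offset"
```

With the sample above this computes an offset of 512 * 2048 = 1048576 bytes, i.e. the usual 1 MiB-aligned first partition.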
> Although this fixes the problem of running the tests as non-root, it
> turned out that even on Ubuntu 14.04 the 20MB disk is not enough for
> the hvm guest. I'll try to find a suitable disk size there and, if
> successful, move back to Ubuntu 16.04.

The tiny disk size was only meant to be used for busybox guests.

Also, I still don't understand what's wrong with the fdisk code. Could
you please check whether the partition table has been set up correctly
on the disk? In other words, does create_one_partition work correctly?

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
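For comparison, if kpartx were adopted as discussed earlier in the thread, the unit/offset arithmetic would go away entirely: kpartx reads the partition table itself and creates /dev/mapper/loopNpM nodes. A minimal sketch of what the wrappers might look like (requires root and the kpartx binary from multipath-tools; the function names and image path are illustrative, not raisin code):

```shell
#!/bin/sh
# Sketch only: illustrative helpers around kpartx, not raisin code.

# Create /dev/mapper/<loopdev>p<N> entries for every partition in the
# image file $1.  -a adds the mappings, -v prints what was created,
# -s waits for udev to settle before returning.
map_partitions() {
    kpartx -avs "$1"
}

# Tear the partition mappings (and the backing loop device) down again.
unmap_partitions() {
    kpartx -d "$1"
}

# Usage (as root):
#   map_partitions /tmp/disk.img     # partition 1 appears under /dev/mapper
#   mkfs.ext3 /dev/mapper/loop0p1    # device name depends on the loop slot
#   unmap_partitions /tmp/disk.img
```

The trade-off raised in the thread still stands: this is simpler and version-proof, but adds multipath-tools to raisin's dependency list.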