
Re: [Xen-devel] booting from HVM (?pv?)

> On 06/25/2013 11:57 AM, AL13N wrote:
>>> On 06/25/2013 11:47 AM, AL13N wrote:
>>>>> On Mon, Jun 24, 2013 at 9:21 PM, AL13N <alien@xxxxxxxx> wrote:
>>>>>> Hi,
>>>>>> I'm the Mageia XEN packager and during QA, we stumbled into a
>>>>>> problem.
>>>>>> in fact, we wanted to test a Mageia 3 installation on an HVM guest.
>>>>>> so, we had a sparse image and an ISO file:
>>>>>> [ 'file:/opt/testhvm.img,sda,w',
>>>>>> 'file:/opt/mageialive.iso,hdb:cdrom,r'
>>>>>> ]
>>>>>> the live system booted and was able to install to disk, but for some
>>>>>> reason it never seemed to boot after the install...
>>>>>> in the end, this "worked" when we changed to:
>>>>>> [ 'file:/opt/testhvm.img,xvda,w', 'file:/opt/testhvm.img,sda,w' ]
>>>>>> apparently 'xvda' was needed to get grub to boot the kernel, and 'sda'
>>>>>> so that it could start...
>>>>> My HVM Linux config files have 'hda' instead of 'sda' -- can you try
>>>>> that instead?
>>>>> Normally what happens is that qemu begins by exposing the hda device
>>>>> to the guest, to boot via grub; but when the Xen PV drivers in the
>>>>> Linux kernel come up, they write to a magic port which causes the
>>>>> physical hda device to disappear.  I *think* then that the Xen PV
>>>>> drivers actually take over that major/minor, so that further reads and
>>>>> writes to hda go through the PV protocol instead.
>>>>> All of this might get mixed up if you're using sda instead.
>>>>>    -George
>>>> well, hda is what we tried first, but we couldn't see the disk from the
>>>> live system with hda; only sda seemed to work for that...
>>> When you say, "we couldn't see the disk from the live system", you're
>>> talking about booting from the live CD?
>>> If post-install you try 'hda' and it boots (after changing the grub and
>>> /etc/fstab if necessary), then I would suspect that there's a problem
>>> with your live CD kernel.  Do you use a different kernel image, and/or
>>> are the modules for the Xen PV devices not included?
>> what I mean is:
>> the 1st try was using hda and hdb:cdrom with an ISO and boot=dc
> Hmm, I think we normally use hdc for the cdrom instead of hdb.  hda and
> hdb would be on the same controller, I think; so it's probably not
> possible to unplug hda without also unplugging hdb.  In that case,
> having the PV drivers loaded could actually make things worse: if they
> successfully unplug hda, then they lose the cdrom; if they don't, then
> the PV drivers may be in a weird state where they've grabbed the hda
> device but aren't able to provide access to the disk anymore.

We'll retry with hdc:cdrom; I hadn't thought of a separate-controller issue.
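
Concretely, something like this for the next attempt (a sketch in xm config
syntax, reusing the paths from earlier in the thread; untested):

```
disk = [ 'file:/opt/testhvm.img,hda,w',
         'file:/opt/mageialive.iso,hdc:cdrom,r' ]
boot = 'dc'
```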

btw: is xen-blkfront used for this, or something like ide_generic?

(we didn't experience any issues with the CD-ROM, at least)
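
One way to check that from inside the live guest is something like the
following (a sketch; the module name xen_blkfront is an assumption from
mainline Linux and may differ on the distro kernel):

```shell
# Check whether the PV block frontend module is loaded in the guest.
lsmod 2>/dev/null | grep -E 'xen_blkfront' || echo "no PV block frontend loaded"
# Emulated IDE/SCSI disks appear as hd*/sd*; PV disks appear as xvd*.
cat /proc/partitions
```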

>> this booted from the live CD, but the disk was not visible, even though
>> xen-blkfront was loaded (does HVM use a different module for emulation?
>> there were some IDE modules loaded as well).
>> this is one thing we wanted to get right in making Mageia xen-ready
>> (i.e. installable on an HVM guest from an ISO)
> Absolutely -- I'm just exploring configuration / guest kernel setups
> that can be tweaked, particularly since at the moment we're pretty much
> locked down for 4.3, so any changes won't be in a public release until
> 4.4 in (probably) 6 months' time.
>> We'll try using hda post-install...
>> PS: one weird thing that I saw in the log files was a message about
>> stripping tap from the device, which was odd, since file:/... was
>> specified and not tap:aio:... (and Mageia 3 doesn't have a specific
>> -xen kernel, nor a blktap-dkms...)
> If you could post the exact message, we could find out where the warning
> was coming from and see if it's something to worry about.

I don't have the exact message, but I think I've seen this before on other
Xen hosts when you specify tap:aio: as the backend device...

I'll see if we can find that message again.
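
For reference, the "stripping tap" wording suggests the toolstack
normalizing the disk uname by removing a known driver prefix before
resolving the path. A minimal shell sketch of that kind of normalization
(illustrative only; the prefix list and the real handling in xend may
differ):

```shell
# Hypothetical sketch: strip a known driver prefix (tap:aio:, file:, phy:)
# from a disk "uname" to recover the plain path, as a toolstack might do
# before handing the path to the block backend.
strip_uname() {
    p="$1"
    p="${p#tap:aio:}"
    p="${p#file:}"
    p="${p#phy:}"
    printf '%s\n' "$p"
}

strip_uname 'file:/opt/testhvm.img'     # -> /opt/testhvm.img
strip_uname 'tap:aio:/opt/testhvm.img'  # -> /opt/testhvm.img
```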

Xen-devel mailing list
