
Re: [Xen-devel] [Qemu-devel] Question about xen disk unplug support for ahci missed in qemu



On 16/10/2015 12:47, Stefano Stabellini wrote:
On Fri, 16 Oct 2015, Fabio Fantoni wrote:
On 16/10/2015 12:13, Anthony PERARD wrote:
On Fri, Oct 16, 2015 at 10:32:44AM +0200, Fabio Fantoni wrote:
On 15/10/2015 20:02, Anthony PERARD wrote:
On Thu, Oct 15, 2015 at 06:27:17PM +0200, Fabio Fantoni wrote:
On 14/10/2015 13:06, Stefano Stabellini wrote:
I would suggest that Fabio avoid AHCI disks altogether and just use OVMF with PV disks only, plus Anthony's patch to libxl to avoid creating any IDE disks: http://marc.info/?l=xen-devel&m=144482080812353.

Would that work for you?
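A minimal sketch of what such an xl cfg could look like (paths and values here are only illustrative, not from a real config):

builder = 'hvm'
bios = 'ovmf'
# xvda: PV disk target only (no hdX / emulated IDE disk)
disk = [ '/path/to/guest.img,qcow2,xvda,rw' ]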
Thanks for the advice, I tried it:
https://github.com/Fantu/Xen/commits/rebase/m2r-testing-4.6

I installed W10 Pro 64-bit with an IDE disk, installed the win pv drivers and then changed the disks to xvdX instead of hdX; that is the only change needed, right? The initial boot is OK (the OVMF part for PV disks seems fine) but Windows fails to boot with a problem in the pv drivers.
In attachment the full qemu log with xen_platform trace and the domU's xl cfg.

Does anyone have Windows domUs working with OVMF and PV disks only? If yes, can you tell me the differences, so I can understand what the problem might be?
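For reference, the change was only the disk target in the xl cfg, something like this (first line the old IDE form, second the PV one):

disk=['/mnt/vm/disks/W10UEFI.disk1.cow-sn1,qcow2,hda,rw']
disk=['/mnt/vm/disks/W10UEFI.disk1.cow-sn1,qcow2,xvda,rw']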
When I worked on the PV disk implementation in OVMF, I was able to boot a Windows 8 with pv disk only.

I don't have access to the guest configuration I was using, but I think one difference would be the viridian setting; I'm pretty sure I did not set it.
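In xl cfg terms that is just the viridian option; disabling it explicitly would be something like:

viridian = 0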

I tried with viridian disabled but it did the same thing. Since the cdrom was the last thing in the qemu log before the xenbus trace, I also tried removing it, but in that case it still doesn't boot correctly; full qemu log in attachment.
I don't know if this is something to improve in OVMF (like what seems to come out of Laszlo's and Kevin's mails) or an unexpected case in the xen winpv drivers. Have you also tried with the latest winpv builds? (to exclude a regression)
No, I did not try the latest winpv drivers.

Sorry I can't help much more than that. When I installed this win8 guest I tried to boot it with pv drivers only, but that was more than a year ago and I have not checked if it's still working. (Also, I cannot try anything more recent right now.)

I did many other tests. Retrying with IDE, the first boot works but shows the pv devices as not working; I did another reboot (still with IDE) and the pv devices were working, and after that I retried with pv (xvdX) and it boots correctly.
After other tests I found that with an empty cdrom device (required for xl cd-insert/cd-eject) boot stops at the start (at the TianoCore image); same result with IDE instead.
From the xl cfg:
disk=['/mnt/vm/disks/W10UEFI.disk1.cow-sn1,qcow2,xvda,rw',',raw,xvdb,ro,cdrom']
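The empty xvdb cdrom entry is there so I can change media at runtime, e.g. (domain name and iso path are just examples):

xl cd-insert W10UEFI xvdb /path/to/some.iso
xl cd-eject W10UEFI xvdb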
With SeaBIOS the domU boots even with the empty cdrom.
In the qemu log I found only these lines that could be related:
xen be: qdisk-51728: error: Could not open image: No such file or directory
xen be: qdisk-51728: initialise() failed
And the last xl dmesg line is:
(d1) Invoking OVMF ...
If you need more information or tests, tell me and I'll post them.
Are you saying that without any cdrom drives, it works correctly?
Yes. I did another test to be sure: starting with IDE, installing Windows and then the pv drivers, rebooting 2 times (on the first of those boots, with IDE only, the net and disk pv drivers were not working), and after that rebooting with pv disks (xvdX) works. With a non-empty cdrom (with an iso) it works, with an empty one it does not; I tried with both IDE (hdX) and pv (xvdX).
The empty cdrom not working with OVMF is, I suppose, an OVMF bug or an unexpected case.
As for the bigger winpv drivers problem at boot, I suppose it could be solved by improving OVMF and the winpv drivers, removing the current bad hybrid behaviour, but my knowledge is too limited to be sure. The problem of pv start after install requiring at least 2 reboots could also be a Windows 10 problem (just a supposition).

Can the empty cdrom case with OVMF be solved, please?

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

