
Re: [Xen-users] HVM domU on storage driver domain



On Sat, Feb 25, 2017 at 12:07 AM, Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:
> On Fri, Feb 24, 2017 at 10:13:53PM +0800, G.R. wrote:
>> On Thu, Feb 23, 2017 at 8:44 PM, Roger Pau Monné <roger.pau@xxxxxxxxxx> 
>> wrote:
>> >> libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm: Spawning 
>> >> device-model /usr/local/lib/xen/bin/qemu-dm with arguments:
>> >> libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm:   
>> >> /usr/local/lib/xen/bin/qemu-dm
>> >> libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm:   -d
>> >> libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm:   92
>> >> libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm:   -domain-name
>> >> libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm:   ruibox-dm
>> >> libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm:   -vnc
>> >> libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm:   0.0.0.0:0
>> >> libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm:   -vncunused
>> >> libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm:   -M
>> >> libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm:   xenpv
>> >> libxl: debug: libxl_dm.c:2098:libxl__spawn_local_dm: Spawning 
>> >> device-model /usr/local/lib/xen/bin/qemu-dm with additional environment:
>> >> libxl: debug: libxl_dm.c:2100:libxl__spawn_local_dm:   
>> >> XEN_QEMU_CONSOLE_LIMIT=1048576
> Hm, AFAICT the problem seems to be that there's a device model launched to
> serve the stubdomain, and that's completely wrong. The stubdomain shouldn't
> require a device model at all IMHO.
I don't think I agree with your judgment.
With the stubdom config, I can see a new domain launched alongside the
domU in question:
Name                                        ID   Mem VCPUs    State    Time(s)
Domain-0                                     0  1024     6     r-----     137.0
nas                                          1  7936     2     -b----     444.0
ruibox                                       6  2047     1     --p---       0.0
ruibox-dm                                    7    32     1     -b----       0.0

The 'ruibox-dm' name is referenced in the qemu-dm parameter log.
I think this comes from spawn_stub_launch_dm(), which calls
libxl__spawn_local_dm() and prints the log in question.
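For context, the stubdomain path is selected in the domU config. A minimal sketch of the relevant bits (option names are from xl.cfg; the values are placeholders, not my exact file):

```
# Hypothetical sketch, not my exact config file:
name = "ruibox"
builder = "hvm"
memory = 2048
vcpus = 1
# This option is what makes xl spawn the 'ruibox-dm' stub domain:
device_model_stubdomain_override = 1
```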

Also, the debug patch you created is not being triggered at all.
The libxl__need_xenpv_qemu() function you touched is called in two situations:
1. In domcreate_launch_dm() for the PV path, which is not applicable here
since this is HVM.
2. Indirectly through libxl__dm_check_start() (likely device hotplug
related).

For #2, I don't see libxl_create.c calling it directly.
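The call sites are easy to double-check with grep. A self-contained stand-in of the approach (the demo directory and file below are fabricated, since I can't assume where your tree lives; against a real checkout you would just run the final grep on tools/libxl/):

```shell
# Create a stand-in for tools/libxl/ with one fake source file:
mkdir -p /tmp/libxl-grep-demo
cat > /tmp/libxl-grep-demo/libxl_dm.c <<'EOF'
/* stand-in for the real libxl_dm.c */
int libxl__dm_check_start(void) { return libxl__need_xenpv_qemu(); }
EOF
# List the files that reference the function:
grep -rln "libxl__need_xenpv_qemu" /tmp/libxl-grep-demo
# -> /tmp/libxl-grep-demo/libxl_dm.c
```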

>> The background is that, as I mentioned at the very beginning of this
>> thread, I'm also trying to use a local qemu device model + driver domain
>> by NFS-mounting the remote disk image.
>> The domU appears to go through the BIOS boot into Windows but gets
>> stuck at the boot screen.
>> According to Paul, the win-pv-driver maintainer, the debug log looks
>> just fine and it's probably the back-end that is not working properly.
>> The email thread can be found here:
>> https://lists.xen.org/archives/html/win-pv-devel/2017-02/msg00027.html
>
> Is the NFS share on a guest on the same host? I remember issues when trying to
> do NFS from a guest and mounting the share on the Dom0.
>
Yes, the NFS share is exposed from the FreeNAS domU on the same host.
It has to be that way, since the same domU is being used as the storage
driver domain in my experiment here.
I also experienced an issue when mounting the share on Dom0 before.
That issue was discussed on this list years ago and was root-caused to
the kernel config.
I haven't seen any other issues since fixing the kernel config.
In case you are interested in the background:
https://lists.xen.org/archives/html/xen-devel/2013-04/msg01302.html
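For completeness, the NFS-backed setup I'm describing looks roughly like this; the server address, export path, mountpoint, and image name below are placeholders, not my actual values:

```
# dom0 /etc/fstab entry mounting the FreeNAS guest's export:
192.168.1.10:/mnt/tank/images  /mnt/nfs  nfs  vers=3,hard  0  0

# the domU's xl disk line then points at the mounted image:
disk = [ '/mnt/nfs/ruibox.img,raw,hda,rw' ]
```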

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
https://lists.xen.org/xen-users
