
Re: [Xen-devel] dom0 kernel - irq nobody cared ... the continuing saga ..



>>> Sander Eikelenboom <linux@xxxxxxxxxxxxxx> 02/10/15 6:30 PM >>>
>Tuesday, February 10, 2015, 5:22:16 PM, you wrote:
>>>>> Sander Eikelenboom <linux@xxxxxxxxxxxxxx> 02/10/15 5:01 PM >>>
>>>I haven't checked the call chain of xen_pcibk_do_op .. but that could be a
>>>side effect of libxl not imitating pci-front well enough (since HVM guests
>>>don't use the pci-front driver, but instead rely on libxl and Qemu to play
>>>those parts).
>
>> I thought the frontend functionality was entirely in qemu. Does this behave
>> identically between qemu-trad and qemu-upstream?
>
>AFAIK yes, just tested: also with qemu-trad the xenstore frontend pci entry
>stays in state "1" .. and therefore pciback doesn't get further than state "3".
>
>qemu doesn't do xenbus itself (at least not for pci).
>libxl isn't notified by qemu that the device is set up (nor does it have the
>functionality in place to actually change the xenstore entry for this case).
>
>So nothing happens.
>
>>>One of the issues I already mentioned is that devices never get to the state
>>>connected for HVM guests. The backend state stays 3 (XenbusStateInitialised)
>>>and the frontend state stays 1 (XenbusStateInitialising).
>
>> That sounds wrong too - not sure where these states are driven from for
>> pcifront (see above).
>
>For PV guests, pci-back and pci-front mostly do their own dance, driven by
>those xenbus state changes. For PV guests libxl even waits for the xenbus
>state to become "4" aka connected in libxl_pci.c.
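>
>For reference, those numbers map to the xenbus_state enum in Xen's public
>io/xenbus.h (quoted from memory here, so double-check against the actual
>header):
>
>enum xenbus_state {
>    XenbusStateUnknown       = 0,
>    XenbusStateInitialising  = 1,
>    XenbusStateInitWait      = 2,   /* early init done, waiting for the peer */
>    XenbusStateInitialised   = 3,   /* waiting for a connection from the peer */
>    XenbusStateConnected     = 4,
>    XenbusStateClosing       = 5,
>    XenbusStateClosed        = 6,
>    XenbusStateReconfiguring = 7,
>    XenbusStateReconfigured  = 8
>};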
>
>For HVM guests it's really up to libxl at the moment, but it is lacking in
>multiple ways. (Probably it worked with xend, but these things didn't get
>ported properly to libxl yet, probably because at a glance everything seems
>to work fine.)

So let's ask the tool stack and qemu maintainers (now Cc-ed) about how this is
supposed to work, and how much of that is known to actually be implemented.

Jan

>But when you try to mimic it by hand after the guest has started
>(just write to the xenstore entry with xenstore-write and bump it from state
>"1" to state "2" or state "3"), with:
>
>xenstore-write /local/domain/1/device/pci/0/state 2
>or
>xenstore-write /local/domain/1/device/pci/0/state 3
>
>you will notice xen-pciback responds, but it tries to read some pci-front
>config directly, which fails since there is no pcifront connection:
>[  926.050193] xen-pciback pci-1-0: fe state changed 3
>[  926.050964] xen-pciback pci-1-0: Reading frontend config
>[  926.051189] xen-pciback pci-1-0: 2 Error reading configuration from frontend
>
>So that should probably be skipped.
>
>with:
>xenstore-write /local/domain/1/device/pci/0/state 4
>
>we get: 
>[ 1053.062003] xen-pciback pci-1-0: fe state changed 4
>
>Now the "root" pciback entry state is also changed to 4 ..
>however the individual device entries underneath are still state 3.
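>
>Purely as illustration, the same pokes can be done programmatically through
>libxenstore instead of xenstore-write -- a rough, untested sketch (domid 1 and
>the pci/0 path are just the values from the commands above):
>
>/* Rough, untested sketch: flip the frontend pci state node via libxenstore
>   instead of xenstore-write.  Build with -lxenstore. */
>#include <stdio.h>
>#include <string.h>
>#include <xenstore.h>
>
>int main(void)
>{
>    struct xs_handle *xsh = xs_open(0);
>    const char *path = "/local/domain/1/device/pci/0/state";
>    const char *state = "4";                /* XenbusStateConnected */
>
>    if (!xsh) {
>        perror("xs_open");
>        return 1;
>    }
>    if (!xs_write(xsh, XBT_NULL, path, state, strlen(state)))
>        perror("xs_write");
>    xs_close(xsh);
>    return 0;
>}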
>
>Apart from that, most of the xenstore-watches in the code seem to be on that
>root entry and not on the individual devices, which is probably going to lead
>to problems when removing only one of the passed-through devices from a given
>guest (assuming the rest were working properly).
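>
>If one wanted a watch per device instead of only on the root entry, a rough,
>untested libxenstore sketch could look like the below. The exact per-device
>key ("state-0" under the backend dir) is an assumption on my part, based on
>what xenstore-ls shows here, so adjust the path accordingly:
>
>#include <stdio.h>
>#include <stdlib.h>
>#include <xenstore.h>
>
>int main(void)
>{
>    struct xs_handle *xsh = xs_open(0);
>    /* Assumed path: per-device state node under the pciback backend dir
>       for domid 1; adjust to whatever xenstore-ls actually shows. */
>    const char *path = "/local/domain/0/backend/pci/1/0/state-0";
>
>    if (!xsh || !xs_watch(xsh, path, "pcidev0"))
>        return 1;
>
>    for (;;) {
>        unsigned int num, len;
>        char **vec = xs_read_watch(xsh, &num);  /* blocks until the watch fires */
>        char *val;
>
>        if (!vec)
>            break;
>        val = xs_read(xsh, XBT_NULL, vec[XS_WATCH_PATH], &len);
>        printf("%s -> %s\n", vec[XS_WATCH_PATH], val ? val : "(gone)");
>        free(val);
>        free(vec);
>    }
>    xs_close(xsh);
>    return 0;
>}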
>
>So it's a multi-headed beast .. and I don't know if the way it's put together,
>and if the limited available states (and the limitations of xenstore-watches),
>are actually enough to make this work properly.
>The only spec about which side is supposed to do what when (and what to do
>when it fails) ... seems to be the xen-pciback and pci-front code.
>
>Not to mention backward compatibility (it's broken, but it does work to some extent).
>
>So fixing it is far above my limited C skills (I'm able to read to a certain
>extent but not write) and my limited knowledge of the libxl-isms with respect
>to timeouts, callbacks and all that kind of stuff.
>
>--
>Sander


