Re: [Xen-devel] [PATCH] libxl: Don't insert PCI device into xenstore for HVM guests
On Fri, May 29, 2015 at 10:54:09AM +0100, Ross Lagerwall wrote:
> On 05/29/2015 10:50 AM, Wei Liu wrote:
> > On Fri, May 29, 2015 at 10:43:08AM +0100, Ross Lagerwall wrote:
> > > On 05/29/2015 10:41 AM, Wei Liu wrote:
> > > > On Fri, May 29, 2015 at 08:59:45AM +0100, Ross Lagerwall wrote:
> > > > > When doing passthrough of a PCI device for an HVM guest, don't
> > > > > insert the device into xenstore; otherwise pciback attempts to
> > > > > use it, which conflicts with QEMU.
> > > > >
> > > > > This manifests itself such that the first time a device is
> > > > > passed to a domain, it succeeds. Subsequent attempts fail unless
> > > > > the device is unbound from pciback or the machine is rebooted.
> > > > >
> > > >
> > > > The commit message looks sensible to me. However, this might break
> > > > qemu-trad PCI passthrough if I'm not mistaken. What QEMU version
> > > > are you using? Upstream or trad? Have you tested both of them?
> > > >
> > >
> > > qemu-trad. I haven't tested with qemu-upstream.
> > >
> >
> > I don't quite get this. Doesn't qemu-trad depend on those xenstore
> > nodes for PCI passthrough information? What did I miss?
> >
>
> A different set of xenstore keys is used for communication between
> libxl and QEMU. The communication between libxl and QEMU happens
> further up in the same function:
> http://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=tools/libxl/libxl_pci.c;h=e0743f8112689b340ba7de88bc8895b62105aaba;hb=HEAD#l901
>

OK. Now I get the idea.

IMHO this piece of code is not in a very good state. The problem is that
the way it works is very fragile: we now have three functions, each of
which has partial responsibility for writing some xenstore nodes.

This is not your fault.

Acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>

> Regards,
> --
> Ross Lagerwall
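[Editor's note] The distinction Ross describes can be observed directly
in xenstore from dom0. The nodes the patch stops writing for HVM guests
are the pciback backend nodes under /local/domain/0/backend/pci/<domid>/,
while qemu-trad is driven through the separate device-model nodes under
/local/domain/0/device-model/<domid>/. Below is a minimal read-only
sketch using libxenstore to inspect both sets of paths; the directory
layout follows the conventional pciback and qemu-trad conventions, and
the specific key names shown (num_devs, state, parameter) should be
treated as illustrative, not exhaustive.

/*
 * Minimal sketch: dump the two distinct xenstore areas discussed in the
 * thread for a given guest domid.  Read-only; paths follow the
 * conventional pciback and qemu-trad layout (key names illustrative).
 *
 * Build (assumption): gcc -o pci-nodes pci-nodes.c -lxenstore
 */
#include <stdio.h>
#include <stdlib.h>
#include <xenstore.h>

static void dump_node(struct xs_handle *xs, const char *path)
{
    unsigned int len;
    char *val = xs_read(xs, XBT_NULL, path, &len);

    printf("%-55s = %s\n", path, val ? val : "(absent)");
    free(val);
}

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <domid>\n", argv[0]);
        return 1;
    }
    int domid = atoi(argv[1]);

    struct xs_handle *xs = xs_open(0);
    if (!xs) {
        perror("xs_open");
        return 1;
    }

    char path[256];

    /* Nodes consumed by pciback -- what the patch stops writing for HVM. */
    snprintf(path, sizeof(path),
             "/local/domain/0/backend/pci/%d/0/num_devs", domid);
    dump_node(xs, path);
    snprintf(path, sizeof(path),
             "/local/domain/0/backend/pci/%d/0/state", domid);
    dump_node(xs, path);

    /* Nodes libxl uses to talk to qemu-trad (hotplug commands etc.). */
    snprintf(path, sizeof(path),
             "/local/domain/0/device-model/%d/state", domid);
    dump_node(xs, path);
    snprintf(path, sizeof(path),
             "/local/domain/0/device-model/%d/parameter", domid);
    dump_node(xs, path);

    xs_close(xs);
    return 0;
}

Run as ./pci-nodes <domid> in dom0. Because pciback watches its backend
nodes while qemu-trad watches the device-model nodes, writing both sets
for an HVM passthrough left pciback and QEMU contending for the same
device, which is the conflict the commit message describes.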