
Re: [win-pv-devel] Parameterization vendor prefix and PCI device id

> -----Original Message-----
> From: win-pv-devel-bounces@xxxxxxxxxxxxxxxxxxxx [mailto:win-pv-devel-
> bounces@xxxxxxxxxxxxxxxxxxxx] On Behalf Of Fabio Fantoni
> Sent: 09 September 2015 14:04
> To: Paul Durrant; win-pv-devel@xxxxxxxxxxxxxxxxxxxx
> Cc: Stefano Stabellini
> Subject: Re: [win-pv-devel] Parameterization vendor prefix and PCI device id
> On 09/09/2015 10:54, Paul Durrant wrote:
> > Hi,
> >
> >    My recent set of patches (one for each driver) to parameterize the
> > device name vendor prefix and vendor PCI device id has a knock-on effect on
> > compatibility between drivers. This, however, is with good reason...
> >
> >    There was a recent incident where Windows PV drivers were accidentally
> posted to Windows Update (not by Citrix) using binding names in use by
> Citrix XenServer. To avoid this sort of accident it seems prudent to
> parameterize the names and these patches do that.
> >    The default vendor prefix has been set to 'XP' (Xen Project) rather than
> > 'XS' (XenServer). Citrix will continue to use the 'XS' prefix by setting it
> > at build time, but for Xen Project development builds this means drivers
> > built after these patches are applied will not be compatible with those
> > before. The bindings for the XenServer vendor PCI device are also no longer
> > hardcoded, but this is unlikely to affect most people unless they have been
> > deliberately specifying this device in their VM config.
> >    I am also intending to back-port all the patches to the 8.1 branch and
> > tag a new rc in the near future. I will mail out once the patches are there
> > and the branches are tagged. With any luck I can also get rc builds posted
> > to xenbits in the near future; that may require a little infrastructure
> > tweaking, though.
> >
> >    Cheers,
> >
> >      Paul
> >
> >
> I thought the default was already the Xen Project one, and that the
> drivers could not be installed on a different device id...
> But from this it seems the drivers can still be installed manually even
> if the emulated Xen device in QEMU has a different id, is that right?

XENBUS will bind to the Xen platform device (namely 5853:0001), an old  
XenServer variant of that (5853:0002) or a 'vendor' device of which the only 
registered example is one for XenServer (5853:C000) to which you refer below...

> Another strange thing is the default device id in Xen: looking at
> libxl_dm.c, the default seems to be nothing:
> > switch (b_info->u.hvm.vendor_device) {
> >          case LIBXL_VENDOR_DEVICE_XENSERVER:
> >              flexarray_append(dm_args, "-device");
> >              flexarray_append(dm_args, "xen-pvdevice,device-id=0xc000");
> >              break;
> >          default:
> >              break;
> >          }
> But from this qemu patch:
> http://git.qemu.org/?p=qemu.git;a=commitdiff;h=539891a85d17bd8c23a254
> 7e52e26993350d2c3a
> the default is also nothing, and the commit description says it should
> always be specified by the toolstack.
> Xen should set the Xen Project id by default but currently doesn't, is
> that right?
> From libxl_dm.c it seems that xen-pvdevice can be set correctly only for
> the xenserver case.
> Should -device xen-pvdevice,device-id=0x0001 be added by default, based on
> http://xenbits.xen.org/docs/unstable-staging/misc/pci-device-
> reservations.txt
> or am I wrong?

QEMU, as invoked via libxl, will always create the Xen platform device 
(5853:0001) and so nothing special is required in xl.cfg to give you a VM into 
which you can install PV drivers.
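By way of illustration, a minimal xl.cfg along these lines (the name, memory size and disk path are hypothetical) is already enough to get a VM with the 5853:0001 platform device; no vendor_device setting is required:

```
# Hypothetical minimal HVM guest config; QEMU as invoked by libxl
# creates the Xen platform device (5853:0001) unconditionally.
builder = "hvm"
name = "win-guest"                   # hypothetical guest name
memory = 2048
disk = [ "phy:/dev/vg/win,hda,w" ]   # hypothetical disk path
```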

> The docs also seem to mention other possible ids, but currently
> u.hvm.vendor_device provides only none or xenserver.
> Added Stefano Stabellini on cc.

That is correct. No-one else has registered a vendor device.
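For completeness, the only non-default value that vendor_device accepts today is "xenserver", which makes libxl pass "-device xen-pvdevice,device-id=0xc000" to QEMU, as in the libxl_dm.c snippet quoted earlier. A sketch of the relevant xl.cfg line:

```
# vendor_device accepts only "none" (the default) or "xenserver" today.
vendor_device = "xenserver"
```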


> Thanks for any reply and sorry for my bad english.
> _______________________________________________
> win-pv-devel mailing list
> win-pv-devel@xxxxxxxxxxxxxxxxxxxx
> http://lists.xenproject.org/cgi-bin/mailman/listinfo/win-pv-devel
