
Re: [Xen-devel] RE: [Xen-changelog] [xen-unstable] xend: hot-plug PCI devices at boot-time



Hi,

would an alternative be to just revert c/s 19754 and allow
duplicate calls to setupOneDevice() in the HVM case?
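
To restate the duplication Dexuan describes below as a condensed, runnable
sketch (the class, signatures and method bodies here are simplified,
print-only stand-ins, not the real xend code; only the call order is taken
from this thread):

# Sketch only: PciControllerSketch stands in for xend's PciController.
# The bodies just log the calls so the double setup is visible.

class PciControllerSketch:
    def setupOneDevice(self, domain, bus, slot, func):
        print('setupOneDevice(%04x:%02x:%02x.%x)' % (domain, bus, slot, func))

    def setupDevice(self, pci_dev_list):
        # Call #1: reached from _initDomain() -> _createDevices() ->
        # _createDevice(); this is the loop that c/s 19754 removed.
        for (domain, bus, slot, func) in pci_dev_list:
            self.setupOneDevice(domain, bus, slot, func)

    def reconfigureDevice(self, pci_dev_list):
        # Call #2: reached from pci_device_configure_boot() ->
        # pci_device_configure() -> reconfigureDevice().
        for (domain, bus, slot, func) in pci_dev_list:
            self.setupOneDevice(domain, bus, slot, func)

if __name__ == '__main__':
    dev = [(0x0000, 0x03, 0x00, 0x0)]
    ctrl = PciControllerSketch()
    ctrl.setupDevice(dev)        # first invocation
    ctrl.reconfigureDevice(dev)  # second invocation for the same device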

On Tue, Jul 28, 2009 at 02:47:16PM +0800, Cui, Dexuan wrote:
> Hi Simon, 
> > I think that a simple solution to this is to just remove the first 
> > invocation.
> This was checked in as c/s 19754.
> Unfortunately, this breaks device assignment for PV guests: xend no longer
> invokes setupOneDevice() for PV guests at all.
> 
> The attached patch fixes the issue.  Please have a look.
> 
> Thanks,
> -- Dexuan
> 
> 
> 
> -----Original Message-----
> From: Simon Horman [mailto:horms@xxxxxxxxxxxx] 
> Sent: 2009-06-15 9:53
> To: Cui, Dexuan
> Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
> Subject: Re: [Xen-devel] RE: [Xen-changelog] [xen-unstable] xend: hot-plug 
> PCI devices at boot-time
> 
> On Fri, Jun 12, 2009 at 02:35:02PM +0800, Cui, Dexuan wrote:
> > On Fri, Jun 12, 2009 at 14:34, Simon Horman wrote:
> > > On Fri, Jun 12, 2009 at 01:51:10PM +0800, Cui, Dexuan wrote:
> > > > Hi Simon,
> > > > After this changeset, I found some new issues in xend:
> > > > I noticed in xend.log that setupOneDevice() is invoked twice,
> > > > even though I statically assign only one device to the HVM guest.
> > > > 
> > > > After looking into the xend code, I found that in XendDomainInfo.py:
> > > > _initDomain() -> _createDevices(), we invoke
> > > > self._createDevice(devclass, config), which eventually invokes
> > > > setupOneDevice() -- this is the first time.
> > > > Later, still in _createDevices(), we invoke
> > > > pci_device_configure_boot() -> pci_device_configure() ->
> > > > dev_control.reconfigureDevice(devid, dev_config) ->
> > > > xend/server/pciif.py:reconfigureDevice() -> setupOneDevice()
> > > > -- this is the second time.  Can you remove the duplicate invocation?
> > > 
> > > Sure, I will look into it ASAP.
> > 
> > > Can I confirm which version of xen-unstable.hg and qemu-xen-unstable.git
> > > you are using?
> > I'm using the latest xen-unstable 19740, Dom0 898, ioemu
> > e0bb6b8df60863bca0163a1688baf4854e931e55.
> 
> Hi Dexuan,
> 
> I think that a simple solution to this is to just remove the
> first invocation.
> 
> -----------------------------------------------------------------------
> 
> xend: pass-through: Only call setupOneDevice() once per device
> 
> As observed by Dexuan Cui, when PCI devices are passed through at
> domain-creation-time setupOneDevice() will be called twice.
> 
> Once via setupDevice() and once via reconfigureDevice(), which
> is called from pci_device_configure().
> 
> This patch removes the first of these.
> 
> Cc: Dexuan Cui <dexuan.cui@xxxxxxxxx>
> Cc: Masaki Kanno <kanno.masaki@xxxxxxxxxxxxxx>
> Signed-off-by: Simon Horman <horms@xxxxxxxxxxxx>
> 
> Index: xen-unstable.hg/tools/python/xen/xend/server/pciif.py
> ===================================================================
> --- xen-unstable.hg.orig/tools/python/xen/xend/server/pciif.py        2009-06-15 11:24:00.000000000 +1000
> +++ xen-unstable.hg/tools/python/xen/xend/server/pciif.py     2009-06-15 11:24:02.000000000 +1000
> @@ -436,8 +436,6 @@ class PciController(DevController):
>                                      ' same guest with %s'
>                                  raise VmError(err_msg % (s, dev.name))
>  
> -        for (domain, bus, slot, func) in pci_dev_list:
> -            self.setupOneDevice(domain, bus, slot, func)
>          wPath = '/local/domain/0/backend/pci/%u/0/aerState' % (self.getDomid())
>          self.aerStateWatch = xswatch(wPath, self._handleAerStateWatch)
>          log.debug('pci: register aer watch %s', wPath)




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

