
Re: [Xen-devel] RFC: [PATCH 1/3] Enhance platform support for PCI



>>> On 17.03.15 at 13:06, <mjaggi@xxxxxxxxxxxxxxxxxx> wrote:

> On Tuesday 17 March 2015 12:58 PM, Jan Beulich wrote:
>>>>> On 17.03.15 at 06:26, <mjaggi@xxxxxxxxxxxxxxxxxx> wrote:
>>> In drivers/xen/pci.c, on a BUS_NOTIFY_ADD_DEVICE notification dom0 issues a
>>> hypercall to inform Xen that a new PCI device has been added.
>>> If we were to inform Xen about a newly added PCI bus, there are two ways:
>>> a) Issue the hypercall from drivers/pci/probe.c
>>> b) When a new device is found (BUS_NOTIFY_ADD_DEVICE), issue the
>>> PHYSDEVOP_pci_device_add hypercall to Xen; if Xen does not find that
>>> segment number (s_bdf), it will return an error,
>>> SEG_NO_NOT_FOUND. After that, the Linux Xen code could issue the
>>> PHYSDEVOP_pci_host_bridge_add hypercall.
>>>
>>> I think (b) can be done with minimal code changes. What do you think?
>> I'm pretty sure (a) would even be refused by the maintainers, unless
>> there already is a notification being sent. As to (b): kernel code could
>> keep track of which segment/bus pairs it has informed Xen about, and
>> hence wouldn't even need to wait for an error to be returned from
>> the device-add request (which in your proposal would need to be
>> re-issued after the host-bridge-add).
> I have a query on the CFG space address to be passed as a hypercall
> parameter. of_pci_get_host_bridge_resources only parses the ranges
> property, not reg.
> The reg property holds the CFG space address, which is usually stored in
> private PCI host controller driver structures.
> 
> So a pci_dev's parent pci_bus would not have that info.
> One way is to add a method in struct pci_ops, but I am not sure whether
> it would be accepted.

I'm afraid I don't understand what you're trying to tell me.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

