
[Xen-devel] pci dev config issue



I noticed that the PCI device's config is handled differently from vbd/vnif, with the following comments in the source:

 # Parsing the device SXP's. In most cases, the SXP looks
 # like this:
 #
 # [device, [vif, [mac, xx:xx:xx:xx:xx:xx], [ip 1.3.4.5]]]
 #
 # However, for PCI devices it looks like this:
 #
 # [device, [pci, [dev, [domain, 0], [bus, 0], [slot, 1]]]]
 #
 # It seems the reasoning for this difference is because
 # pciif.py needs all the PCI device configurations at
 # the same time when creating the devices.
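To make the difference concrete, here is an illustrative sketch (not xend's actual parsing code) of the two SXP shapes above as nested Python lists. The second dev sub-element in pci_sxp is hypothetical, added to show that a single pci entry can bundle several devices while a vif entry describes exactly one:

```python
# Illustrative only: the two device SXP shapes, as nested Python lists.
vif_sxp = ['device', ['vif', ['mac', 'xx:xx:xx:xx:xx:xx'], ['ip', '1.3.4.5']]]
pci_sxp = ['device', ['pci',
                      ['dev', ['domain', 0], ['bus', 0], ['slot', 1]],
                      ['dev', ['domain', 0], ['bus', 0], ['slot', 2]]]]  # hypothetical 2nd dev

def count_children(sxp, tag):
    """Count sub-elements with the given tag inside a device SXP."""
    return sum(1 for e in sxp[1][1:] if isinstance(e, list) and e[0] == tag)

# One pci entry carries all its devs; a vif entry has no 'dev' children at all.
print(count_children(pci_sxp, 'dev'))
print(count_children(vif_sxp, 'dev'))
```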

So multiple PCI devices sit in one single config entry with a single uuid (see the
configs below), which makes device handling difficult (consider, for example,
hotplug support).

Can anybody explain why pciif.py needs all PCI devices configured at the same time?
Is that still valid now? If not, can I remove this limitation?
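A minimal sketch of what this means in practice for the multi-device pci record shown below (the function and dict here are illustrative, not xend's API): all devices live under dev-0..dev-(num_devs-1) in one entry, so unplugging one device means rewriting num_devs and renumbering keys, whereas a vbd just removes its own subtree.

```python
# Hypothetical helper: list the PCI BDFs stored in a single pci config
# entry of the form shown in the dump below (num_devs + dev-N keys).
def parse_pci_record(record):
    num = int(record["num_devs"])
    return [record["dev-%d" % i] for i in range(num)]

# Values taken from the xenstore dump in this mail.
record = {
    "uuid": "7f2dc1a1-d0de-ebf0-37fd-67ff0103c3c9",
    "num_devs": "2",
    "dev-0": "0000:02:00.00",
    "dev-1": "0000:03:00.00",
}

print(parse_pci_record(record))  # both devices share one entry and one uuid
```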
 

=============== multiple vbd config ===================
 vbd = ""
  769 = ""
   virtual-device = "769"
   device-type = "disk"
   protocol = "x86_32-abi"
   backend-id = "0"
   state = "4"
   backend = "/local/domain/0/backend/vbd/6/769"
   ring-ref = "9"
   event-channel = "7"
  833 = ""
   virtual-device = "833"
   device-type = "disk"
   protocol = "x86_32-abi"
   backend-id = "0"
   state = "3"
   backend = "/local/domain/0/backend/vbd/6/833"
   ring-ref = "790"
   event-channel = "9"


================ multiple pci config ===================
 pci = ""
  8 = ""
   0 = ""
    domain = "ExampleDomain"
    frontend = "/local/domain/8/device/pci/0"
    uuid = "7f2dc1a1-d0de-ebf0-37fd-67ff0103c3c9"
    dev-1 = "0000:03:00.00"
    dev-0 = "0000:02:00.00"
    state = "4"
    online = "1"
    frontend-id = "8"
    num_devs = "2"
    root-0 = "0000:00"
    root_num = "1"
 

 
-- 
best rgds,
edwin

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
