
Re: [Xen-devel] PCI Pass-through in Xen ARM - Draft 2.

On Monday 06 July 2015 02:41 PM, Ian Campbell wrote:
On Sun, 2015-07-05 at 11:25 +0530, Manish Jaggi wrote:
On Monday 29 June 2015 04:01 PM, Julien Grall wrote:
Hi Manish,

On 28/06/15 19:38, Manish Jaggi wrote:
4.1 Holes in guest memory space
Holes are added in the guest memory space for mapping PCI devices' BARs.
These are defined in arch-arm.h:

/* For 32bit */
/* For 64bit */
The memory layout for 32bit and 64bit is exactly the same. Why do you
need to differ here?
I think Ian has already replied. I will change the name of the macro.
4.2 New entries in xenstore for device BARs
The toolstack also updates the xenstore information for the device
(virtual BAR : physical BAR).
This information is read by xenpciback and returned to the pcifront
driver on configuration space accesses.
Can you detail what you plan to put in xenstore and how?
It is an implementation detail, but I plan to put it under the
domU/device/ hierarchy.
Actually, xenstore is an API of sorts which needs to be maintained going
forward (since front and backend can evolve separately), so it does need
some level of design and documentation.
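As a sketch of what such entries might look like under the existing pciback
backend directory (the BAR key name and value format here are hypothetical,
not a settled interface; dev-N/vdev-N follow existing pciback conventions):

```
/local/domain/0/backend/pci/<domid>/0/
    dev-0    = "0000:00:01.0"                 physical sbdf
    vdev-0   = "0000:00:04.0"                 guest sbdf
    bar-0-0  = "0xe0000000:0x40000000"        virtual BAR : physical BAR
                                              (hypothetical key)
```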

What about the expansion ROM?
Do you want to put some restriction on not using the expansion ROM as a
passthrough device?
"expansion ROM as a passthrough device" doesn't make sense to me,
passthrough devices may _have_ an expansion ROM.

The expansion ROM is just another BAR. I don't know how pcifront/back
deal with those today on PV x86, but I see no reason for ARM to deviate.

4.3 Hypercall for bdf mapping notification to xen
#define PHYSDEVOP_map_sbdf              43
typedef struct {
      u32 s;
      u8 b;
      u8 df;
      u16 res;
} sbdf_t;
struct physdev_map_sbdf {
      int domain_id;
      sbdf_t    sbdf;
      sbdf_t    gsbdf;
};
Each domain has a pdev list, which contains the list of all PCI devices.
The pdev structure already has the sbdf information. The arch_pci_dev
structure is updated to contain the gsbdf information (gs: guest segment
id).

Whenever there is a trap from the guest or an interrupt has to be
injected, the pdev list is iterated to find the gsbdf.
Can you give more background for this section? i.e:
        - Why do you need this?
        - How will Xen translate the gbdf to a vDeviceID?
In the context of the hypercall processing.
        - Who will call this hypercall?
        - Why not setting the gsbdf when the device is assigned?
Can the maintainer of pciback suggest an alternative?
That's not me, but I don't think this belongs here, I think it can be
done from the toolstack. If you think not then please explain what
information the toolstack doesn't have in its possession which prevents
this mapping from being done there.
The toolstack does not have the guest sbdf information. I could only find it in xenpciback.

The answer to your question is that the only place I have found where all
the required information is available to issue the hypercall is this
function:


+       /* Issue hypercall here */
+#ifdef CONFIG_ARM64
+       map_sbdf.domain_id = pdev->xdev->otherend_id;
+       map_sbdf.sbdf_s = dev->bus->domain_nr;
+       map_sbdf.sbdf_b = dev->bus->number;
+       map_sbdf.sbdf_d = dev->devfn >> 3;
+       map_sbdf.sbdf_f = dev->devfn & 0x7;
+       map_sbdf.gsbdf_s = 0;
+       map_sbdf.gsbdf_b = 0;
+       map_sbdf.gsbdf_d = slot;
+       map_sbdf.gsbdf_f = dev->devfn & 0x7;
+       pr_info("## sbdf = %d:%d:%d.%d g_sbdf %d:%d:%d.%d "
+               "domain_id=%d ##\n",
+               map_sbdf.sbdf_s,
+               map_sbdf.sbdf_b,
+               map_sbdf.sbdf_d,
+               map_sbdf.sbdf_f,
+               map_sbdf.gsbdf_s,
+               map_sbdf.gsbdf_b,
+               map_sbdf.gsbdf_d,
+               map_sbdf.gsbdf_f,
+               map_sbdf.domain_id);
+       err = HYPERVISOR_physdev_op(PHYSDEVOP_map_sbdf, &map_sbdf);
+       if (err)
+               printk(KERN_ERR "Xen error: PHYSDEVOP_map_sbdf failed\n");
+#endif


Xen-devel mailing list
