
Re: [Xen-devel] PCI Pass-through in Xen ARM - Draft 2.

On Thursday 30 July 2015 08:09 PM, Ian Campbell wrote:
On Thu, 2015-07-30 at 18:21 +0530, Manish Jaggi wrote:
On Thursday 30 July 2015 03:24 PM, Ian Campbell wrote:
On Wed, 2015-07-29 at 15:07 +0530, Manish Jaggi wrote:
On Monday 06 July 2015 03:50 PM, Ian Campbell wrote:
On Mon, 2015-07-06 at 15:36 +0530, Manish Jaggi wrote:
On Monday 06 July 2015 02:41 PM, Ian Campbell wrote:
On Sun, 2015-07-05 at 11:25 +0530, Manish Jaggi wrote:
On Monday 29 June 2015 04:01 PM, Julien Grall wrote:
Hi Manish,

On 28/06/15 19:38, Manish Jaggi wrote:
4.1 Holes in guest memory space
Holes are added in the guest memory space for mapping pci devices' BARs.
These are defined in arch-arm.h

/* For 32bit */
/* For 64bit */
The memory layout for 32bit and 64bit is exactly the same. Do you
need to differ here?
I think Ian has already replied. I will change the name of
4.2 New entries in xenstore for device BARs
The toolstack also updates the xenstore information for the device
(virtualbar:physicalbar).
This information is read by xenpciback and returned to the guest on
driver configuration space accesses.
Can you detail what you plan to put in xenstore and
It is implementation-specific. But I plan to put it under domU / device
Actually, xenstore is an API of sorts which needs to be maintained going
forward (since front and backend can evolve separately), so it needs
some level of design and documentation.
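Since the layout needs documenting either way, here is one possible shape
for the nodes, purely as an illustration: the dev-N/vdev-N keys follow
pciback's existing convention, while the vbar-N key and its value format
are hypothetical, not anything pciback implements today.

```
/local/domain/0/backend/pci/<domid>/0/dev-0  = "0000:01:00.0"
/local/domain/0/backend/pci/<domid>/0/vdev-0 = "0000:00:00.0"
/local/domain/0/backend/pci/<domid>/0/vbar-0 = "<virtual bar>:<physical bar>"
```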

What about the expansion ROM?
Do you want to put some restriction on not using expansion
ROM as
passthrough device.
"expansion ROM as a passthrough device" doesn't make sense to
passthrough devices may _have_ an expansion ROM.

The expansion ROM is just another BAR. I don't know how we
deal with those today on PV x86, but I see no reason for ARM to differ.

4.3 Hypercall for bdf mapping notification to xen
#define PHYSDEVOP_map_sbdf              43
typedef struct {
         u32 s;    /* segment */
         u8 b;     /* bus */
         u8 df;    /* device:function */
         u16 res;  /* reserved */
} sbdf_t;
struct physdev_map_sbdf {
         int domain_id;
         sbdf_t    sbdf;
         sbdf_t    gsbdf;
};

Each domain has a pdev list, which contains the list of assigned pci
devices. The pdev structure already has the sbdf information. The
arch_pci_dev is updated to contain the gsbdf information (gs = guest
segment id).

Whenever there is a trap from the guest or an interrupt has to be
injected, the pdev list is iterated to find the gsbdf.
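The lookup described above could be sketched roughly as follows. This is
a self-contained illustration using stdlib types and a plain singly-linked
list, not actual Xen code (the real pdev list lives in the hypervisor and
uses Xen's own list primitives):

```c
#include <stdint.h>
#include <stddef.h>

/* Mirrors the sbdf_t from the draft: segment, bus, device:function. */
typedef struct {
    uint32_t s;    /* segment */
    uint8_t  b;    /* bus */
    uint8_t  df;   /* device:function */
    uint16_t res;  /* reserved */
} sbdf_t;

/* Simplified stand-in for a per-domain pdev list entry. */
struct pdev {
    sbdf_t sbdf;        /* physical s:b:d.f */
    sbdf_t gsbdf;       /* guest-visible s:b:d.f */
    struct pdev *next;
};

/* Iterate the domain's pdev list and return the guest sbdf for a
 * physical sbdf, or NULL if the device is not assigned to it. */
static const sbdf_t *find_gsbdf(const struct pdev *list, sbdf_t key)
{
    for (const struct pdev *p = list; p; p = p->next)
        if (p->sbdf.s == key.s && p->sbdf.b == key.b &&
            p->sbdf.df == key.df)
            return &p->gsbdf;
    return NULL;
}
```

Note the linear scan is O(n) per trap; if a domain can have many devices,
a lookup keyed on sbdf would avoid walking the list on every
configuration-space access.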
Can you give more background for this section? i.e:
        - Why do you need this?
        - How will xen translate the gsbdf to a vDeviceID?
In the context of the hypercall processing.
        - Who will call this hypercall?
        - Why not set the gsbdf when the device is assigned?
Can the maintainer of pciback suggest an alternative?
That's not me, but I don't think this belongs here; I think it should be
done from the toolstack. If you think not then please explain what
information the toolstack doesn't have in its possession which prevents
this mapping from being done there.
The toolstack does not have the guest sbdf information. I could
find it in xenpciback.
Are you sure? The sbdf relates to the physical device, correct? If so,
then surely the toolstack knows it -- it's written in the config file
and is the primary parameter to all of the related libxl
APIs. The toolstack wouldn't be able to do anything about passing
through a given device without knowing which device it should be
passing through.

Perhaps this info needs plumbing through to some new bit of the
toolstack, but it is surely available somewhere.

If you meant the virtual SBDF then that is in
I added prints in libxl__device_pci_add. vdevfn is always 0, so this may
not be the right variable to use.
Can you please recheck?

Also the vdev-X entry in xenstore appears to be created from pciback
code and not from xl.
Check function xen_pcibk_publish_pci_dev.

So I have to send a hypercall from pciback only.
I don't think that necessarily follows.

You could have the tools read the vdev-X node back on plug.
I have been trying to trace the caller of libxl__device_pci_add
during pci device assignment from a cfg file (cold boot).
It should be called from the xl create flow. Is it called from C code or
Python code?
There is no Python code which you need to worry about involved here. You
can completely ignore tools/python.

In the first instance you need only to worry about tools/libxl/libxl* (the
toolstack library). The xl commands are in tools/libxl/xl* and call
libxl_domain_create_new with a libxl_domain_config struct which contains
the array of pci devices to cold plug.

Hotplug starts at libxl_device_pci_add.

Most of the code for the PCI specific bits are in tools/libxl/libxl_pci.c.
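For reference, cold plug corresponds to a pci list in the guest cfg file,
which ends up in the libxl_domain_config passed to
libxl_domain_create_new; something like (BDF illustrative):

```
# guest config fragment: assign host device 0000:01:00.0 at creation
pci = [ '01:00.0' ]
```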

Secondly, the vdev-X entry is created asynchronously by dom0 watching on
an event. So how could the tools read it back and call assign_device again?
Perhaps by using a xenstore watch on that node to wait for the assignment
from pciback to occur.
As per the flow in the do_pci_add function, assign_device is called first
and, on success, the xenstore entry is created.
Are you suggesting changing the sequence?
We can discuss this more on the #xenarm irc channel.
Or you could change things such that vdevfn is always chosen by the
toolstack for ARM, not optionally like it is on x86.
For this one, the struct libxl_device_pci has a field "vdevfn", which is
supposed to allow the user to specify a specific vdevfn. I'm not sure how
that happens or fits together but libxl could undertake to set that on ARM
in the case where the user hasn't done so, effectively taking control of
the PCI bus assignment.
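If libxl were to take over vdevfn assignment on ARM, the core of it could
be as simple as handing out the first free guest slot. A minimal sketch;
the function name and the bitmap representation of occupied slots are
assumptions for illustration, not actual libxl API:

```c
#include <stdint.h>

#define PCI_SLOT_MAX 32  /* 5-bit device number per PCI bus */

/* Return the devfn of the first free guest slot, or -1 if the bus is
 * full.  'used' is a bitmap of occupied slots (bit n == slot n).
 * devfn packs a 5-bit device and 3-bit function; we allocate whole
 * slots, i.e. function 0, for simplicity. */
static int alloc_vdevfn(uint32_t used)
{
    for (int slot = 0; slot < PCI_SLOT_MAX; slot++)
        if (!(used & (1u << slot)))
            return slot << 3;  /* devfn = slot:0 */
    return -1;
}
```

With something like this, libxl would fill in vdevfn whenever the user
left it at 0, making the toolstack the single authority on guest PCI bus
layout rather than relying on pciback's choice.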


Xen-devel mailing list
