
Re: [Xen-devel] [PATCH v2 1/9] xen/vpci: introduce basic handlers to trap accesses to the PCI config space



> -----Original Message-----
> From: Roger Pau Monne
> Sent: 24 April 2017 11:12
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx; konrad.wilk@xxxxxxxxxx;
> boris.ostrovsky@xxxxxxxxxx; Ian Jackson <Ian.Jackson@xxxxxxxxxx>; Wei Liu
> <wei.liu2@xxxxxxxxxx>; Jan Beulich <jbeulich@xxxxxxxx>; Andrew Cooper
> <Andrew.Cooper3@xxxxxxxxxx>
> Subject: Re: [PATCH v2 1/9] xen/vpci: introduce basic handlers to trap
> accesses to the PCI config space
> 
> On Mon, Apr 24, 2017 at 10:58:04AM +0100, Paul Durrant wrote:
> > > -----Original Message-----
> > > From: Roger Pau Monne
> > > Sent: 24 April 2017 10:42
> > > To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> > > Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx; konrad.wilk@xxxxxxxxxx;
> > > boris.ostrovsky@xxxxxxxxxx; Ian Jackson <Ian.Jackson@xxxxxxxxxx>;
> > > Wei Liu <wei.liu2@xxxxxxxxxx>; Jan Beulich <jbeulich@xxxxxxxx>;
> > > Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>
> > > Subject: Re: [PATCH v2 1/9] xen/vpci: introduce basic handlers to trap
> > > accesses to the PCI config space
> > >
> > > On Fri, Apr 21, 2017 at 05:23:34PM +0100, Paul Durrant wrote:
> > > > > -----Original Message-----
> > > > > From: Roger Pau Monne [mailto:roger.pau@xxxxxxxxxx]
> > > [...]
> > > > > +int xen_vpci_read(unsigned int seg, unsigned int bus, unsigned int devfn,
> > > > > +                  unsigned int reg, uint32_t size, uint32_t *data)
> > > > > +{
> > > > > +    struct domain *d = current->domain;
> > > > > +    struct pci_dev *pdev;
> > > > > +    const struct vpci_register *r;
> > > > > +    union vpci_val val = { .double_word = 0 };
> > > > > +    unsigned int data_rshift = 0, data_lshift = 0, data_size;
> > > > > +    uint32_t tmp_data;
> > > > > +    int rc;
> > > > > +
> > > > > +    ASSERT(vpci_locked(d));
> > > > > +
> > > > > +    *data = 0;
> > > > > +
> > > > > +    /* Find the PCI dev matching the address. */
> > > > > +    pdev = pci_get_pdev_by_domain(d, seg, bus, devfn);
> > > > > +    if ( !pdev )
> > > > > +        goto passthrough;
> > > >
> > > > I hope this can eventually be generalised so I wonder what your
> > > > intention is regarding co-existence between Xen-emulated PCI
> > > > config space, passthrough, and PCI devices emulated externally. We
> > > > already have a framework for registering PCI devices by SBDF, but
> > > > this code seems to make no use of it, which I suspect is likely to
> > > > cause future conflict.
> > >
> > > Yes, the long-term aim is to use this code in order to implement
> > > PCI passthrough for PVH and HVM DomUs as well.
> > >
> > > TBH, I didn't know we already had such code (I assume you mean the
> > > IOREQ-related PCI code). As it is, I see a couple of issues with
> > > that. The first one is that this code expects an ioreq client on the
> > > other end, whereas the code I'm adding here is all inside of the
> > > hypervisor. The second issue is that the IOREQ code ATM only allows
> > > for local PCI accesses, which means I would have to extend it to
> > > also deal with ECAM/MMCFG areas.
> > >
> > > I completely agree that at some point this should be made to work
> > > together, but I'm not sure if it would be better to do that once we
> > > want to also use vPCI for DomUs, so that the Dom0 side is not
> > > delayed further.
> >
> > BTW, that's also an argument for forgetting about the r-b scheme for
> > handler registration since, if this really is for dom0 only, 8 pages'
> > worth of direct map is not a lot.
> 
> It's 8 pages for each device, not 8 pages for each domain, so it
> doesn't matter if it's Dom0 or DomU: each PCIe device would use 8
> pages.

Sorry, yes of course it is.

  Paul

> 
> Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

