
Re: [Xen-devel] [PATCH v2 06/11] ioreq: allow dispatching ioreqs to internal servers



On Thu, Sep 26, 2019 at 03:17:15PM +0200, Jan Beulich wrote:
> On 26.09.2019 13:14, Roger Pau Monné wrote:
> > On Fri, Sep 20, 2019 at 01:35:13PM +0200, Jan Beulich wrote:
> >> Having said this, as a result of having looked at some of the
> >> involved code, and with the cover letter not clarifying this,
> >> what's the reason for going this seemingly more complicated
> >> route, rather than putting vPCI behind the hvm_io_intercept()
> >> machinery, just like is the case for other internal handling?
> > 
> > If vPCI is handled at the hvm_io_intercept level (like it's done ATM)
> > then it's not possible to have both (external) ioreq servers and vPCI
> > handling accesses to different devices in the PCI config space, since
> > vPCI would trap all accesses to the PCI IO ports and the MCFG regions
> > and those would never reach the ioreq processing.
> 
> Why would vPCI (want to) do that? The accept() handler should
> sub-class the CF8-CFF port range; there would likely want to
> be another struct hvm_io_ops instance dealing with config
> space accesses (and perhaps with ones through port I/O and
> through MCFG at the same time).

Do you mean to expand hvm_io_handler to add something like a pciconf
sub-structure to the existing union of portio and mmio?

That's indeed feasible, but I'm not sure why it's better than the
approach proposed in this series. Long term I think we would like all
intercept handlers to use the ioreq infrastructure and remove the
usage of hvm_io_intercept.
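
To make the question concrete, a rough sketch of what such a pciconf
sub-structure could look like is below. The field names and the
config-space range fields are purely illustrative (not the exact Xen
declarations); the point is only that the existing portio/mmio union
would gain a third member, selected by the handler's type, and served
by its own struct hvm_io_ops instance for both CF8/CFC and MCFG
accesses:

#include <stdint.h>

/* Forward declarations standing in for the real Xen types. */
struct hvm_mmio_ops;
struct hvm_io_ops;

struct hvm_io_handler {
    union {
        struct {
            const struct hvm_mmio_ops *ops;
        } mmio;
        struct {
            unsigned int port, size;
        } portio;
        struct {
            /* Hypothetical: the config-space range this handler claims. */
            unsigned int segment;
            uint8_t start_bus, end_bus;
        } pciconf;
    };
    const struct hvm_io_ops *ops;
    /* Selects the active union member: port I/O, MMIO, or (new) PCI config. */
    uint8_t type;
};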

> In the end this would likely be
> more similar to how chipsets handle this on real hardware
> than your "internal server" solution (albeit I agree to a
> degree it's an implementation detail anyway).

I think the end goal should be to unify the internal and external
intercepts into a single point, and the only feasible way to do this
is to switch the internal intercepts to use the ioreq infrastructure.
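
For illustration only, the unified dispatch point I have in mind would
look roughly like the sketch below. All names here are hypothetical
(ioreq_server_select, ioreq_send_to_emulator and friends are stand-ins,
not the actual Xen code); the relevant part is that internal handlers
such as vPCI and external emulators are picked by the same selection
logic, instead of internal intercepts being checked in a separate path:

#include <stdbool.h>

/* Emulation request: type, address, size, data, ... (details omitted). */
struct ioreq;

struct ioreq_server {
    bool internal;
    /* Internal servers supply an in-hypervisor handler... */
    int (*handle)(struct ioreq *ioreq, void *data);
    void *data;
    /* ...external ones are reached via their shared ioreq page/event channel. */
};

/* Hypothetical helpers assumed to exist for the purpose of this sketch. */
struct ioreq_server *ioreq_server_select(const struct ioreq *ioreq);
int ioreq_send_to_emulator(struct ioreq_server *s, struct ioreq *ioreq);

static int ioreq_dispatch(struct ioreq *ioreq)
{
    struct ioreq_server *s = ioreq_server_select(ioreq);

    if ( !s )
        return -1; /* Unhandled: fall back to default behaviour. */

    /*
     * Single selection point: internal handlers (e.g. vPCI) and external
     * emulators (e.g. QEMU) are dispatched from the same place.
     */
    return s->internal ? s->handle(ioreq, s->data)
                       : ioreq_send_to_emulator(s, ioreq);
}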

Thanks, Roger.
