Re: [Xen-devel] [PATCH v2 06/11] ioreq: allow dispatching ioreqs to internal servers



On Thu, Sep 26, 2019 at 05:13:23PM +0200, Jan Beulich wrote:
> On 26.09.2019 15:46, Roger Pau Monné  wrote:
> > On Thu, Sep 26, 2019 at 03:17:15PM +0200, Jan Beulich wrote:
> >> On 26.09.2019 13:14, Roger Pau Monné  wrote:
> >>> On Fri, Sep 20, 2019 at 01:35:13PM +0200, Jan Beulich wrote:
> >>>> Having said this, as a result of having looked at some of the
> >>>> involved code, and with the cover letter not clarifying this,
> >>>> what's the reason for going this seemingly more complicated
> >>>> route, rather than putting vPCI behind the hvm_io_intercept()
> >>>> machinery, just like is the case for other internal handling?
> >>>
> >>> If vPCI is handled at the hvm_io_intercept level (like it's done ATM)
> >>> then it's not possible to have both (external) ioreq servers and vPCI
> >>> handling accesses to different devices in the PCI config space, since
> >>> vPCI would trap all accesses to the PCI IO ports and the MCFG regions
> >>> and those would never reach the ioreq processing.
> >>
> >> Why would vPCI (want to) do that? The accept() handler should
> >> sub-class the CF8-CFF port range; there would likely want to
> >> be another struct hvm_io_ops instance dealing with config
> >> space accesses (and perhaps with ones through port I/O and
> >> through MCFG at the same time).
> > 
> > Do you mean to expand hvm_io_handler to add something like a pciconf
> > sub-structure to the existing union of portio and mmio?
> 
> Yes, something along these lines.
> 
> > That's indeed feasible, but I'm not sure why it's better than the
> > approach proposed in this series. Long term I think we would like all
> > intercept handlers to use the ioreq infrastructure and remove the
> > usage of hvm_io_intercept.
> > 
> >> In the end this would likely be
> >> more similar to how chipsets handle this on real hardware
> >> than your "internal server" solution (albeit I agree that to a
> >> degree it's an implementation detail anyway).
> > 
> > I think the end goal should be to unify the internal and external
> > intercepts into a single point, and the only feasible way to do this
> > is to switch the internal intercepts to use the ioreq infrastructure.
> 
> Well, I recall this having been mentioned as an option; I don't
> recall this being a firm plan. There are certainly benefits to
> such a model, but there's also potentially more overhead (at the
> very least the ioreq_t will then need setting up / maintaining
> everywhere, when right now the interfaces are using more
> immediate parameters).

AFAICT from the code in hvmemul_do_io, which dispatches to both
hvm_io_intercept and the ioreq servers, the ioreq is already there, so
I'm not sure why more setup would be required in order to handle
internal intercepts as ioreq servers. For vPCI at least I've been able
to get away without having to modify hvmemul_do_io, IIRC.
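
To illustrate, here's a minimal self-contained sketch (plain C, not
the actual Xen code) of the dispatch order I mean: the ioreq_t is
populated once, up front, and the same structure can then be offered
first to the internal intercepts and, failing that, to the
ioreq-server path. Both handler functions are hypothetical stand-ins
for hvm_io_intercept() and the ioreq-server lookup:

#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-in for the real ioreq_t in the Xen public ABI. */
typedef struct ioreq {
    uint64_t addr;   /* port or MMIO address of the access */
    uint64_t data;   /* value read or to be written */
    uint32_t size;   /* access width in bytes */
    uint8_t  dir;    /* 0 = write, 1 = read */
} ioreq_t;

/* Stand-in for hvm_io_intercept(): the internal handlers. */
static bool internal_intercept(ioreq_t *p)
{
    (void)p;
    return false; /* no internal handler claimed the access */
}

/* Stand-in for forwarding the access to an external ioreq server. */
static bool external_ioreq_server(ioreq_t *p)
{
    (void)p;
    return false; /* no server registered for this range */
}

/*
 * The ioreq_t already exists by the time we get here, so letting the
 * internal handlers consume it directly adds no extra setup cost.
 */
static bool dispatch_io(ioreq_t *p)
{
    if ( internal_intercept(p) )
        return true;
    return external_ioreq_server(p);
}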

> But yes, if this _is_ the plan, then going that route right away
> for vPCI is desirable.

I think it would be desirable to have a single point where intercepts
are handled instead of having such different implementations for
internal vs external ones, and the only way I can devise to achieve
this is by moving the internal intercepts to the ioreq model.

I'm certainly not planning to move all intercepts right now, but I
think having the code in place to allow this, with at least vPCI using
it, is a good first step.
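
For the sake of discussion, a hypothetical sketch of what registering
vPCI as an internal ioreq server could look like; none of these names
are taken from the actual series, they only illustrate the shape of
the interface (a handler callback instead of a shared ioreq page and
event channel), with the accept logic claiming only the PCI
config-space ranges and leaving everything else to external servers:

#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-in for the real ioreq_t. */
typedef struct ioreq {
    uint64_t addr;
    uint64_t data;
    uint32_t size;
    uint8_t  dir;
} ioreq_t;

struct domain; /* opaque here, as elsewhere in Xen */

typedef bool (*internal_ioreq_handler_t)(struct domain *d, ioreq_t *p);

/* Hypothetical registration call for an internal server. */
int hvm_register_internal_ioreq_server(struct domain *d,
                                       internal_ioreq_handler_t handler);

/* vPCI's handler: claims only PCI config-space accesses. */
static bool vpci_ioreq_handler(struct domain *d, ioreq_t *p)
{
    (void)d;

    /* Port I/O to the CF8-CFF config address/data pair. */
    if ( p->addr >= 0xcf8 && p->addr <= 0xcff )
        return true; /* decode and hand over to the vPCI code */

    /* MCFG accesses would be matched against the ECAM window here. */
    return false; /* not ours: let other servers have a go */
}

Registration would then be a single call during domain construction,
e.g. hvm_register_internal_ioreq_server(d, vpci_ioreq_handler), while
external servers keep using the existing hypercall interface.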

Thanks, Roger.

