
Re: [Xen-devel] [PATCH v2 06/11] ioreq: allow dispatching ioreqs to internal servers


  • To: Roger Pau Monne <roger.pau@xxxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>
  • From: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
  • Date: Fri, 27 Sep 2019 08:17:22 +0000
  • Accept-language: en-GB, en-US
  • Cc: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Fri, 27 Sep 2019 08:17:45 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH v2 06/11] ioreq: allow dispatching ioreqs to internal servers

> -----Original Message-----
> From: Roger Pau Monne <roger.pau@xxxxxxxxxx>
> Sent: 26 September 2019 16:59
> To: Jan Beulich <jbeulich@xxxxxxxx>
> Cc: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; Paul Durrant 
> <Paul.Durrant@xxxxxxxxxx>; xen-
> devel@xxxxxxxxxxxxxxxxxxxx; Wei Liu <wl@xxxxxxx>
> Subject: Re: [PATCH v2 06/11] ioreq: allow dispatching ioreqs to internal 
> servers
> 
> On Thu, Sep 26, 2019 at 05:13:23PM +0200, Jan Beulich wrote:
> > On 26.09.2019 15:46, Roger Pau Monné  wrote:
> > > On Thu, Sep 26, 2019 at 03:17:15PM +0200, Jan Beulich wrote:
> > >> On 26.09.2019 13:14, Roger Pau Monné  wrote:
> > >>> On Fri, Sep 20, 2019 at 01:35:13PM +0200, Jan Beulich wrote:
> > >>>> Having said this, as a result of having looked at some of the
> > >>>> involved code, and with the cover letter not clarifying this,
> > >>>> what's the reason for going this seemingly more complicated
> > >>>> route, rather than putting vPCI behind the hvm_io_intercept()
> > >>>> machinery, just like is the case for other internal handling?
> > >>>
> > >>> If vPCI is handled at the hvm_io_intercept level (like it's done ATM)
> > >>> then it's not possible to have both (external) ioreq servers and vPCI
> > >>> handling accesses to different devices in the PCI config space, since
> > >>> vPCI would trap all accesses to the PCI IO ports and the MCFG regions
> > >>> and those would never reach the ioreq processing.
> > >>
> > >> Why would vPCI (want to) do that? The accept() handler should
> > >> sub-class the CF8-CFF port range; there would likely want to
> > >> be another struct hvm_io_ops instance dealing with config
> > >> space accesses (and perhaps with ones through port I/O and
> > >> through MCFG at the same time).
> > >
> > > Do you mean to expand hvm_io_handler to add something like a pciconf
> > > sub-structure to the existing union of portio and mmio?
> >
> > Yes, something along these lines.
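
[For concreteness, a rough sketch of what such an extension might look like,
assuming the current layout of struct hvm_io_handler in
xen/include/asm-x86/hvm/io.h; the pciconf member and the action type below are
purely illustrative, not something proposed in this series:

    /* Illustrative only: a pciconf variant alongside the existing
     * portio/mmio sub-structures, matched on the decoded config address
     * regardless of whether the access arrived via CF8/CFC or via MCFG. */
    typedef int (*pciconf_action_t)(bool dir, pci_sbdf_t sbdf,
                                    unsigned int reg, unsigned int size,
                                    uint32_t *data);

    struct hvm_io_handler {
        union {
            struct {
                const struct hvm_mmio_ops *ops;
            } mmio;
            struct {
                unsigned int port, size;
                portio_action_t action;
            } portio;
            struct {
                pci_sbdf_t sbdf;         /* hypothetical */
                pciconf_action_t action; /* hypothetical */
            } pciconf;
        };
        const struct hvm_io_ops *ops;
        uint8_t type;
    };
]
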
> >
> > > That's indeed feasible, but I'm not sure why it's better than the
> > > approach proposed in this series. Long term I think we would like all
> > > intercept handlers to use the ioreq infrastructure and remove the
> > > usage of hvm_io_intercept.
> > >
> > >> In the end this would likely
> > >> be more similar to how chipsets handle this on real hardware
> > >> than your "internal server" solution (albeit I agree to a
> > >> degree it's an implementation detail anyway).
> > >
> > > I think the end goal should be to unify the internal and external
> > > intercepts into a single point, and the only feasible way to do this
> > > is to switch the internal intercepts to use the ioreq infrastructure.
> >
> > Well, I recall this having been mentioned as an option; I don't
> > recall this being a firm plan. There are certainly benefits to
> > such a model, but there's also potentially more overhead (at the
> > very least the ioreq_t will then need setting up / maintaining
> > everywhere, when right now the interfaces are using more
> > immediate parameters).
> 
> AFAICT from the code in hvmemul_do_io, which dispatches to both
> hvm_io_intercept and ioreq servers, the ioreq is already there, so I'm
> not sure why more setup would be required in order to handle internal
> intercepts as ioreq servers. For vPCI at least I've been able to get
> away without having to modify hvmemul_do_io IIRC.
> 
> > But yes, if this _is_ the plan, then going that route right away
> > for vPCI is desirable.
> 
> I think it would be desirable to have a single point where intercepts
> are handled instead of having such different implementations for
> internal vs external, and the only way I can devise to achieve this is
> by moving intercepts to the ioreq model.
> 

+1 for the plan from me... doing this has been on my own to-do list for a while.

The lack of range-based registration for internal emulators is at least one 
thing that will be addressed by going this route, and I'd also expect some 
degree of simplification to the code by unifying things, once all the emulation 
is ported over.
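
For comparison, external servers already get range-based registration through
the ioreq machinery, and presumably internal servers would claim ranges the
same way once ported. A minimal sketch, assuming the current in-hypervisor
helper and the XEN_DMOP_IO_RANGE_PCI / XEN_DMOP_PCI_SBDF definitions from the
public dm_op interface; how internal servers end up expressing this is of
course up to the series:

    /* Sketch: claim just one device's config space (e.g. 0000:00:1f.0)
     * for a given ioreq server, rather than trapping the whole of
     * CF8-CFF and every MCFG region as the hvm_io_intercept-based vPCI
     * handling does today. */
    static int claim_device(struct domain *d, ioservid_t id)
    {
        uint64_t sbdf = XEN_DMOP_PCI_SBDF(0, 0, 0x1f, 0);

        return hvm_map_io_range_to_ioreq_server(d, id,
                                                XEN_DMOP_IO_RANGE_PCI,
                                                sbdf, sbdf);
    }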

> I'm certainly not planning to move all intercepts right now, but I
> think it's a good first step having the code in place to allow this,
> and at least vPCI using it.
> 

I think it's fine to do things piecemeal, but all the internal emulators do
need to be ported over a.s.a.p. I can certainly try to help with this once
the groundwork is done.
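
To make "ported over" a bit more concrete: the way I picture it, an internal
server would register a handler that is passed the same ioreq_t the emulation
code already constructs, rather than an event channel and a shared page.
Purely a sketch with made-up names below, since the real interface is whatever
this series ends up defining; vpci_read/vpci_write are the existing vPCI
accessors:

    /* Hypothetical internal-server callback for vPCI. The callback name,
     * its signature and the ioreq->addr encoding (SBDF in the upper bits,
     * register offset in the lower ones) are illustrative only. */
    static int vpci_ioreq_handler(struct vcpu *v, ioreq_t *req)
    {
        pci_sbdf_t sbdf = { .sbdf = req->addr >> 32 };
        unsigned int reg = req->addr & 0xffffffff;

        if ( req->dir == IOREQ_READ )
            req->data = vpci_read(sbdf, reg, req->size);
        else
            vpci_write(sbdf, reg, req->size, req->data);

        return X86EMUL_OKAY;
    }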

  Paul

