
Re: [Xen-devel] [RFC PATCH 07/12] hvmloader: allocate MMCONFIG area in the MMIO hole + minor code refactoring



On Thu, Mar 22, 2018 at 02:56:56AM +1000, Alexey G wrote:
> On Wed, 21 Mar 2018 15:20:17 +0000
> Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:
> 
> >On Thu, Mar 22, 2018 at 12:25:40AM +1000, Alexey G wrote:
> >> 8. As these MMCONFIG PCI conf reads occur out of context (just
> >> address/len/data without any emulated device attached to them),
> >> xen-hvm.c should employ special logic to make them QEMU-friendly --
> >> e.g. right now it sends the received PCI conf access to the
> >> (QEMU-emulated) CF8h/CFCh ports.
> >> Embedding these "naked" accesses into the QEMU infrastructure is a
> >> real problem; workarounds are required. BTW, find_primary_bus() was
> >> dropped from the QEMU code -- it could've been useful here. Let's
> >> assume some workaround is employed (like storing the required object
> >> pointers in global variables for later use in xen-hvm.c).
> >
> >That seems like a minor nit, but why not just use
> >address_space_{read/write} to replay the MCFG accesses as memory
> >read/writes?
> 
> Well, this might actually work, although the overall scenario will be
> a bit overcomplicated for _PCI_CONFIG ioreqs. Here is how it will look:
> 
> QEMU receives PCIEXBAR update -> calls the new dmop to tell Xen new
> MMCONFIG address/size -> Xen (re)maps MMIO trapping area -> someone is
> accessing this area -> Xen intercepts this MMIO access
> 
> But here's what happens next:
> 
> Xen translates MMIO access into PCI_CONFIG and sends it to DM ->
> DM receives _PCI_CONFIG ioreq -> DM translates BDF/addr info back to
> the offset in emulated MMCONFIG range -> DM calls
> address_space_read/write to trigger MMIO emulation
> 
> I think some parts of this equation can be collapsed, can't they?
> 
> The above scenario makes it obvious that, at least for QEMU, the
> MMIO->PCI conf translation is a redundant step. Why not allow the DM to
> specify whether it prefers to receive MMCONFIG accesses natively (as
> MMIO) or as translated PCI conf ioreqs?

You are just adding an extra level of complexity to an interface
that's fairly simple. You register a PCI device using
XEN_DMOP_IO_RANGE_PCI and you get IOREQ_TYPE_PCI_CONFIG ioreqs.
Getting both IOREQ_TYPE_PCI_CONFIG and IOREQ_TYPE_COPY for PCI config
space accesses is misleading.
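
For reference, the registration side is just a single call. A minimal
sketch, assuming libxendevicemodel and an arbitrary example BDF (the
dm, domid and ioservid variables are placeholders the device model
already has):

/* Sketch only: register 0000:00:02.0 with an IOREQ server so that
 * config space accesses to it arrive as IOREQ_TYPE_PCI_CONFIG ioreqs.
 * Error handling omitted for brevity. */
#include <xendevicemodel.h>

static int register_example_pci_dev(xendevicemodel_handle *dm,
                                    domid_t domid, ioservid_t ioservid)
{
    /* segment 0, bus 0, device 2, function 0 */
    return xendevicemodel_map_pcidev_to_ioreq_server(dm, domid, ioservid,
                                                     0, 0, 2, 0);
}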

In both cases Xen would have to do the MCFG access decoding in order
to figure out which IOREQ server will handle the request. At that
point the only step you avoid is the reconstruction of the memory
access from the IOREQ_TYPE_PCI_CONFIG ioreq, which is trivial.
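
Something along these lines should be enough. A sketch only: it
assumes the usual encoding with the SBDF in the upper 32 bits of
req->addr and the config offset in the lower ones, plus a hypothetical
mmcfg_base value cached in QEMU from the last PCIEXBAR update:

/* Sketch: rebuild the MMCONFIG memory access from a PCI_CONFIG ioreq
 * and replay it through the memory address space. */
static void replay_mcfg_access(ioreq_t *req, uint64_t mmcfg_base)
{
    uint32_t sbdf = req->addr >> 32;          /* segment/bus/devfn */
    uint32_t off  = req->addr & 0xffffffff;   /* config space offset */
    /* One 4K window per function in the MMCONFIG area, indexed by BDF. */
    hwaddr addr = mmcfg_base + ((hwaddr)(sbdf & 0xffff) << 12) + off;
    uint8_t buf[8] = { 0 };

    if (req->dir == IOREQ_WRITE) {
        memcpy(buf, &req->data, req->size);
        address_space_write(&address_space_memory, addr,
                            MEMTXATTRS_UNSPECIFIED, buf, req->size);
    } else {
        address_space_read(&address_space_memory, addr,
                           MEMTXATTRS_UNSPECIFIED, buf, req->size);
        req->data = 0;
        memcpy(&req->data, buf, req->size);
    }
}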

> We can still route either ioreq
> type to multiple device emulators accordingly.

It's exactly what is already done for IO space PCI config accesses:
QEMU gets an IOREQ_TYPE_PCI_CONFIG and replays the IO space access
using do_outp and cpu_ioreq_pio.
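
The existing replay boils down to roughly the following. This is a
simplified sketch (not the exact xen-hvm.c code): it ignores the
data_is_ptr/count cases and can only reach segment 0, which is all the
cf8/cfc mechanism allows anyway:

/* Sketch: replay a PCI_CONFIG ioreq through the legacy IO ports,
 * reusing the do_outp/do_inp helpers from xen-hvm.c. */
static void replay_pci_config_as_pio(ioreq_t *req)
{
    uint32_t sbdf = req->addr >> 32;
    uint32_t off  = req->addr & 0xffffffff;
    /* CONFIG_ADDRESS: enable bit, bus/dev/fn in bits 8-23, dword offset. */
    uint32_t cf8 = (1u << 31) | ((sbdf & 0xffff) << 8) | (off & 0xfc);

    do_outp(0xcf8, 4, cf8);
    if (req->dir == IOREQ_WRITE) {
        do_outp(0xcfc + (off & 3), req->size, req->data);
    } else {
        req->data = do_inp(0xcfc + (off & 3), req->size);
    }
}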

If you think using IOREQ_TYPE_COPY for MCFG accesses is such a benefit
for QEMU, why not just translate the IOREQ_TYPE_PCI_CONFIG into
IOREQ_TYPE_COPY in handle_ioreq and dispatch it using
cpu_ioreq_move?
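
That is, something like the sketch below. The mmcfg_base variable is
hypothetical (the value QEMU would cache on a PCIEXBAR update), and
the addr encoding is the same assumption as above:

/* Sketch: turn a PCI_CONFIG ioreq into a memory (COPY) one targeting
 * the emulated MMCONFIG window and reuse the existing MMIO dispatch. */
static void dispatch_pci_config_as_copy(ioreq_t *req, uint64_t mmcfg_base)
{
    uint32_t sbdf = req->addr >> 32;
    uint32_t off  = req->addr & 0xffffffff;
    ioreq_t copy = *req;

    copy.type = IOREQ_TYPE_COPY;
    copy.addr = mmcfg_base + ((uint64_t)(sbdf & 0xffff) << 12) + off;
    copy.data_is_ptr = 0;
    copy.count = 1;

    cpu_ioreq_move(&copy);

    if (req->dir == IOREQ_READ) {
        req->data = copy.data;
    }
}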

Thanks, Roger.


 

