
Re: [Xen-devel] [RFC PATCH 07/12] hvmloader: allocate MMCONFIG area in the MMIO hole + minor code refactoring



On Thu, 22 Mar 2018 12:44:02 +0000
Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:

>On Thu, Mar 22, 2018 at 10:29:22PM +1000, Alexey G wrote:
>> On Thu, 22 Mar 2018 09:57:16 +0000
>> Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:
>> [...]  
>> >> Yes, and it is still needed, as we have two distinct (and not
>> >> equal) interfaces to the PCI conf space. Apart from the overlapping
>> >> 0..FFh range, they can be considered very different interfaces. And
>> >> whether it is a real system or an emulated one -- we can use either
>> >> one of these two interfaces, or both.
>> >
>> >The legacy PCI config space accesses and the MCFG config space
>> >access are just different methods of accessing the PCI
>> >configuration space, but the data _must_ be exactly the same. I
>> >don't see how a device would care about where the access to the
>> >config space originated.  
>> 
>> If they were different methods of accessing the same thing, they
>> could've been used interchangeably. When we get a PCI conf ioreq
>> with an offset >= 100h, we know we cannot just pass it to the
>> emulated CF8/CFC but have to emulate it specifically.
>
>This is already not the best approach to dispatch PCI config space
>access in QEMU. I think the interface in QEMU should be:
>
>pci_conf_space_{read/write}(sbdf, register, size, data)
>
>And this would go directly into the device. But I assume this involves
>a non-trivial amount of work to implement. Hence xen-hvm.c's use of the
>IO port access replay.

Yes, it's a helpful shortcut. The only bad thing is that we can't use it
for PCI extended config accesses; in the current architecture a memory
address within the emulated MMCONFIG area is far preferable.
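
(To illustrate why, here is a rough sketch of the legacy CF8 address
encoding -- not actual QEMU/Xen code, cf8_encode is an invented name,
just the standard layout. The register field is only 8 bits wide, so an
extended config access with an offset >= 100h simply cannot be expressed
through the CF8/CFC replay:)

#include <stdint.h>

/* Legacy PCI config address as written to port 0xCF8:
 * bit 31 = enable, bits 23:16 = bus, 15:11 = device, 10:8 = function,
 * 7:2 = register dword.  Only 8 bits of register offset fit here. */
static uint32_t cf8_encode(uint8_t bus, uint8_t dev, uint8_t fn,
                           uint16_t reg)
{
    /* reg & 0xfc silently truncates anything above 0xff -- which is
     * exactly why the IO port replay only covers the first 256 bytes
     * of config space. */
    return (1u << 31) | ((uint32_t)bus << 16) | ((dev & 0x1fu) << 11) |
           ((fn & 0x7u) << 8) | (reg & 0xfcu);
}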

>> >OK, so you don't want to reconstruct the access, fine.
>> >
>> >Then just inject it using pcie_mmcfg_data_{read/write} or some
>> >similar wrapper. My suggestion was just to try to use the easier
>> >way to get this injected into QEMU.  
>> 
>> QEMU knows its position; the problem is that xen-hvm.c (the ioreq
>> processor) is rather isolated from the MMCONFIG emulation.
>> 
>> If you check the pcie_mmcfg_data_read/write MMCONFIG handlers in
>> QEMU, you can see this:
>> 
>> static uint64_t pcie_mmcfg_data_read(void *opaque, <...>
>> {
>>     PCIExpressHost *e = opaque;
>> ...
>> 
>> We know this 'opaque' when we do MMIO-style MMCONFIG handling, as
>> pcie_mmcfg_data_read/write are the actual handlers.
>> 
>> But xen-hvm.c needs to gain access to that PCIExpressHost out of
>> nowhere, which is possible but would be considered a hack by QEMU. We
>> could also insert some code into the MMCONFIG emulation that stores
>> the info we need in global variables to be used across wildly
>> different and unrelated modules. It will work, but anyone who sees it
>> will have bad thoughts on their mind.
>
>Since you need to notify Xen of the MCFG area address, why not just
>store the MCFG address while doing that? You could do this with a
>helper in xen-hvm.c, and keep the variable local to that file.
>
>In any case, this is a QEMU implementation detail. IMO the IOREQ
>interface is clear and should not be bent like this just because
>'this is easier to implement in QEMU'.

A bit of a hack too, but it might work. Anyway, it's extra work we can
avoid if we simply skip the PCI conf translation for MMCONFIG MMIO
ioreqs targeting QEMU. I completely agree that we need to translate
these accesses into PCI conf ioreqs for device-specific DMs, but for
QEMU it is an unwanted and redundant step.
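
(If we did go that way, the helper could look roughly like this -- a
minimal sketch only; the names and the call site are made up and none of
this exists in xen-hvm.c today:)

#include <stdbool.h>
#include <stdint.h>

/* xen-hvm.c (hypothetical): file-local record of the emulated MMCONFIG
 * placement, filled in at the point where QEMU informs Xen about it. */
static uint64_t mmcfg_base;
static uint64_t mmcfg_size;

static void xen_record_mmcfg(uint64_t base, uint64_t size)
{
    mmcfg_base = base;
    mmcfg_size = size;
}

/* Later, an incoming MMIO ioreq can be recognised as an MMCONFIG access
 * without having to reach into PCIExpressHost from unrelated code. */
static bool xen_is_mmcfg_addr(uint64_t addr)
{
    return mmcfg_size && addr >= mmcfg_base &&
           addr < mmcfg_base + mmcfg_size;
}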

AFAIK (Paul might correct me here) the multiple device emulators
feature already makes use of the distinction between the primary (aka
default) DM and device-specific DMs, so in theory it should be possible
to provide that translation only for device-specific DMs (which run
apart from the emulated machine and cannot use its facilities).
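
For device-specific DMs the translation itself is mechanical; something
along these lines (purely illustrative, standard ECAM layout assumed,
none of these names are real Xen code):

#include <stdint.h>

struct pci_sbdf {
    uint16_t seg;
    uint8_t  bus;
    uint8_t  devfn;
};

/* Decode an ECAM/MMCONFIG address into SBDF + register offset, i.e. the
 * translation a device-specific DM needs because it only understands
 * PCI conf ioreqs.  Assumes a single segment starting at bus 0. */
static void mmcfg_decode(uint64_t addr, uint64_t mmcfg_base,
                         struct pci_sbdf *sbdf, uint16_t *reg)
{
    uint64_t off = addr - mmcfg_base;

    sbdf->seg   = 0;
    sbdf->bus   = (off >> 20) & 0xff;   /* 1MB of ECAM space per bus */
    sbdf->devfn = (off >> 12) & 0xff;   /* 4KB per device/function   */
    *reg        = off & 0xfff;          /* 0..FFFh, extended offsets */
}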
