
Re: [PATCH 10/11] xen/arm: Do not map PCI ECAM space to Domain-0's p2m


  • To: Julien Grall <julien.grall.oss@xxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>
  • From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@xxxxxxxx>
  • Date: Mon, 13 Sep 2021 06:27:18 +0000
  • Accept-language: en-US
  • Cc: Oleksandr Andrushchenko <andr2000@xxxxxxxxx>, xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Oleksandr Tyshchenko <Oleksandr_Tyshchenko@xxxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Artem Mygaiev <Artem_Mygaiev@xxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Rahul Singh <rahul.singh@xxxxxxx>
  • Delivery-date: Mon, 13 Sep 2021 06:27:27 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH 10/11] xen/arm: Do not map PCI ECAM space to Domain-0's p2m

On 11.09.21 00:41, Julien Grall wrote:
>
>
> On Fri, 10 Sep 2021, 21:30 Stefano Stabellini <sstabellini@xxxxxxxxxx> wrote:
>
>     On Fri, 10 Sep 2021, Julien Grall wrote:
>     > On 10/09/2021 15:01, Oleksandr Andrushchenko wrote:
>     > > On 10.09.21 16:18, Julien Grall wrote:
>     > > > On 10/09/2021 13:37, Oleksandr Andrushchenko wrote:
>     > > > > Hi, Julien!
>     > > >
>     > > > Hi Oleksandr,
>     > > >
>     > > > > On 09.09.21 20:58, Julien Grall wrote:
>     > > > > > On 03/09/2021 09:33, Oleksandr Andrushchenko wrote:
>     > > > > > > From: Oleksandr Andrushchenko <oleksandr_andrushchenko@xxxxxxxx>
>     > > > > > >
>     > > > > > > Host bridge controller's ECAM space is mapped into Domain-0's p2m,
>     > > > > > > thus it is not possible to trap the same for vPCI via MMIO handlers.
>     > > > > > > For this to work we need to not map those while constructing the
>     > > > > > > domain.
>     > > > > > >
>     > > > > > > Note, that during Domain-0 creation there is no pci_dev yet
>     > > > > > > allocated for
>     > > > > > > host bridges, thus we cannot match PCI host and its associated
>     > > > > > > bridge by SBDF. Use dt_device_node field for checks instead.
>     > > > > > >
>     > > > > > > Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@xxxxxxxx>
>     > > > > > > ---
>     > > > > > > xen/arch/arm/domain_build.c        |  3 +++
>     > > > > > > xen/arch/arm/pci/ecam.c            | 17 +++++++++++++++++
>     > > > > > > xen/arch/arm/pci/pci-host-common.c | 22 ++++++++++++++++++++++
>     > > > > > > xen/include/asm-arm/pci.h          | 12 ++++++++++++
>     > > > > > >     4 files changed, 54 insertions(+)
>     > > > > > >
>     > > > > > > diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>     > > > > > > index da427f399711..76f5b513280c 100644
>     > > > > > > --- a/xen/arch/arm/domain_build.c
>     > > > > > > +++ b/xen/arch/arm/domain_build.c
>     > > > > > > @@ -1257,6 +1257,9 @@ static int __init map_range_to_domain(const struct dt_device_node *dev,
>     > > > > > >             }
>     > > > > > >         }
>     > > > > > > +    if ( need_mapping && (device_get_class(dev) == DEVICE_PCI) )
>     > > > > > > +        need_mapping = pci_host_bridge_need_p2m_mapping(d, dev, addr, len);
>     > > > > >
>     > > > > > AFAICT, with device_get_class(dev), you know whether the hostbridge is
>     > > > > > used by Xen. Therefore, I would expect that we don't want to map all
>     > > > > > the regions of the hostbridges in dom0 (including the BARs).
>     > > > > >
>     > > > > > Can you clarify it?
>     > > > > We only want to trap ECAM, not MMIOs and any other memory regions as the
>     > > > > bridge is initialized and used by Domain-0 completely.
>     > > >
>     > > > What do you mean by "used by Domain-0 completely"? The hostbridge is owned
>     > > > by Xen so I don't think we can let dom0 access any MMIO regions by
>     > > > default.
>     > >
>     > > Now it's my time to ask why do you think Xen owns the bridge?
>     > >
>     > > All the bridges are passed to Domain-0, are fully visible to Domain-0,
>     > > initialized in Domain-0.
>     > >
>     > > Enumeration etc. is done in Domain-0. So how comes that Xen owns the bridge?
>     > > In what way it does?
>     > >
>     > > Xen just accesses the ECAM when it needs it, but that's it. Xen traps ECAM
>     > > access - that is true.
>     > >
>     > > But it in no way uses MMIOs etc. of the bridge - those under direct control
>     > > of Domain-0
>     >
>     > So I looked on the snipped of the design document you posted. I think you are
>     > instead referring to a different part:
>     >
>     > " PCI-PCIe enumeration is a process of detecting devices connected to 
> its
>     > host.
>     > It is the responsibility of the hardware domain or boot firmware to do 
> the PCI
>     > enumeration and configure the BAR, PCI capabilities, and MSI/MSI-X
>     > configuration."
>     >
>     > But I still don't see why it means we have to map the MMIOs right now. Dom0
>     > should not need to access the MMIOs (aside the hostbridge registers) until the
>     > BARs are configured.
>
>     This is true especially when we are going to assign a specific PCIe
>     device to a DomU. In that case, the related MMIO regions are going to be
>     mapped to the DomU and it would be a bad idea to also keep them mapped
>     in Dom0. Once we do PCIe assignment, the MMIO region of the PCIe device
>     being assigned should only be mapped in the DomU, right?
>
>
> That's actually a good point. This is a recipe for disaster because dom0 and 
> the domU may map the BARs with conflicting caching attributes.
>
> So we ought to unmap the BAR from dom0. When the device is assigned to the domU
1. Yes, currently we map the MMIOs to Dom0 from the beginning (the whole aperture,
actually).

2. When a PCIe device is assigned to a DomU, we do unmap the relevant MMIOs from
Dom0 and map them to the DomU.
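
For reference, a minimal sketch of what that unmap/map step could look like,
assuming Xen's generic unmap_mmio_regions()/map_mmio_regions() helpers and an
identity mapping of the BAR in Dom0; the function name and parameters below are
illustrative only, the real call sites live in the vPCI series:

/*
 * Illustrative sketch only: move a BAR range from Dom0's p2m to the
 * DomU's p2m when the PCI device gets assigned.  'start' and 'size' are
 * the machine address and length of the BAR; 'domu_gfn' is whatever
 * guest frame the vPCI code picked for the DomU.
 */
static int reassign_bar_mmio(struct domain *dom0, struct domain *domu,
                             paddr_t start, paddr_t size, gfn_t domu_gfn)
{
    unsigned long nr = PFN_UP(start + size) - PFN_DOWN(start);
    mfn_t mfn = maddr_to_mfn(start);
    int rc;

    /* Drop the identity mapping from Dom0 so only one domain sees the BAR. */
    rc = unmap_mmio_regions(dom0, gaddr_to_gfn(start), nr, mfn);
    if ( rc )
        return rc;

    /* Expose the same machine frames to the DomU at its guest address. */
    return map_mmio_regions(domu, domu_gfn, nr, mfn);
}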


>
>     If so, it would be better if the MMIO region in question was never
>     mapped into Dom0 at all rather having to unmap it.
>
Ok, I'll do that
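
Just to illustrate the direction (a sketch only: the pci_host_bridges list, the
dt_node field and the cfg->phys_addr/size layout are assumptions, not the actual
code of this series): pci_host_bridge_need_p2m_mapping() from the hunk above
already decides per region, so the same hook could also filter out the other
bridge windows so they never reach Dom0's p2m in the first place, instead of
being mapped and unmapped later:

/*
 * Sketch of the per-region check called from map_range_to_domain().
 * List/field names are assumptions made for illustration purposes.
 */
bool pci_host_bridge_need_p2m_mapping(struct domain *d,
                                      const struct dt_device_node *dev,
                                      paddr_t addr, paddr_t len)
{
    struct pci_host_bridge *bridge;

    list_for_each_entry( bridge, &pci_host_bridges, node )
    {
        /* No pci_dev is allocated for the bridge yet, so match by DT node. */
        if ( bridge->dt_node != dev )
            continue;

        /* Never map the ECAM window: vPCI traps and emulates it instead. */
        if ( addr == bridge->cfg->phys_addr && len == bridge->cfg->size )
            return false;

        /*
         * Other windows of this bridge (e.g. the BAR aperture) could be
         * filtered here as well, so Dom0 never gets them mapped at all.
         */
    }

    return true;
}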
>
>
>     With the current approach, given that the entire PCIe aperture is mapped
>     to Dom0 since boot, we would have to identify the relevant subset region
>     and unmap it from Dom0 when doing assignment.
>
It is already handled in the vPCI code (with non-identity mappings, in the
"PCI devices passthrough on Arm, part 3" series).

 

