
Re: [Xen-devel] [PATCH V2 6/6] iommu/arm: Add Renesas IPMMU-VMSA support


  • To: Oleksandr Tyshchenko <olekstysh@xxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Yoshihiro Shimoda <yoshihiro.shimoda.uh@xxxxxxxxxxx>
  • Date: Wed, 7 Aug 2019 02:41:06 +0000
  • Accept-language: ja-JP, en-US
  • Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>, "julien.grall@xxxxxxx" <julien.grall@xxxxxxx>, "sstabellini@xxxxxxxxxx" <sstabellini@xxxxxxxxxx>
  • Delivery-date: Wed, 07 Aug 2019 02:41:30 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-index: AQHVSVDyIlUrtkBxi0Kon+v5VwnGjKbu/2/g
  • Thread-topic: [PATCH V2 6/6] iommu/arm: Add Renesas IPMMU-VMSA support

Hi Oleksandr-san,

I have access to the datasheet of this hardware, so I have reviewed this patch.
I'm not familiar with the Xen development rules, so some of my comments might
not be a good fit. If so, please ignore :)

> From: Oleksandr Tyshchenko, Sent: Saturday, August 3, 2019 1:40 AM
> 
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
> 
> The IPMMU-VMSA is VMSA-compatible I/O Memory Management Unit (IOMMU)
> which provides address translation and access protection functionalities
> to processing units and interconnect networks.
> 
> Please note, current driver is supposed to work only with newest
> Gen3 SoCs revisions which IPMMU hardware supports stage 2 translation

This should be "R-Car Gen3 SoCs", instead of "Gen3 SoCs".

> table format and is able to use CPU's P2M table as is if one is
> 3-level page table (up to 40 bit IPA).
> 
> The major differences compare to the Linux driver are:
> 
> 1. Stage 1/Stage 2 translation. Linux driver supports Stage 1
> translation only (with Stage 1 translation table format). It manages
> page table by itself. But Xen driver supports Stage 2 translation
> (with Stage 2 translation table format) to be able to share the P2M
> with the CPU. Stage 1 translation is always bypassed in Xen driver.
> 
> So, Xen driver is supposed to be used with newest Gen3 SoC revisions only

Same here.

> (H3 ES3.0, M3 ES3.0, etc.) which IPMMU H/W supports stage 2 translation

According to the latest manual, M3 ES3.0 is named "M3-W+".

> table format.

<snip>
> diff --git a/xen/drivers/passthrough/arm/ipmmu-vmsa.c b/xen/drivers/passthrough/arm/ipmmu-vmsa.c
> new file mode 100644
> index 0000000..a34a8f8
> --- /dev/null
> +++ b/xen/drivers/passthrough/arm/ipmmu-vmsa.c
> @@ -0,0 +1,1342 @@
> +/*
> + * xen/drivers/passthrough/arm/ipmmu-vmsa.c
> + *
> + * Driver for the Renesas IPMMU-VMSA found in R-Car Gen3 SoCs.
> + *
> + * The IPMMU-VMSA is VMSA-compatible I/O Memory Management Unit (IOMMU)
> + * which provides address translation and access protection functionalities
> + * to processing units and interconnect networks.
> + *
> + * Please note, current driver is supposed to work only with newest Gen3 SoCs
> + * revisions which IPMMU hardware supports stage 2 translation table format and
> + * is able to use CPU's P2M table as is.
> + *
> + * Based on Linux's IPMMU-VMSA driver from Renesas BSP:
> + *    drivers/iommu/ipmmu-vmsa.c

So, I think Linux's copyright notices should be described here.

> + * you can found at:
> + *    url: git://git.kernel.org/pub/scm/linux/kernel/git/horms/renesas-bsp.git
> + *    branch: v4.14.75-ltsi/rcar-3.9.6
> + *    commit: e206eb5b81a60e64c35fbc3a999b1a0db2b98044
> + * and Xen's SMMU driver:
> + *    xen/drivers/passthrough/arm/smmu.c
> + *
> + * Copyright (C) 2016-2019 EPAM Systems Inc.
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms and conditions of the GNU General Public
> + * License, version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> + * General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public
> + * License along with this program; If not, see <http://www.gnu.org/licenses/>.

I don't know the Xen license description rules, but since a few source files
already have an SPDX-License-Identifier, can we also use one in this driver?
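For reference, a minimal sketch of what the top of the file could look like
with an SPDX tag. The identifier GPL-2.0-only is my assumption, derived from
the GPLv2 boilerplate already in the patch:

```c
/* SPDX-License-Identifier: GPL-2.0-only */
/*
 * xen/drivers/passthrough/arm/ipmmu-vmsa.c
 *
 * Driver for the Renesas IPMMU-VMSA found in R-Car Gen3 SoCs.
 */
```

With the tag in place, the long license paragraphs could arguably be dropped,
but that depends on the Xen tree's conventions.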

> + */
> +
> +#include <xen/delay.h>
> +#include <xen/err.h>
> +#include <xen/irq.h>
> +#include <xen/lib.h>
> +#include <xen/list.h>

I don't know the Xen passthrough driver rules, but doesn't this need
#include <xen/iommu.h>? (xen/sched.h seems to pull it in, so no compile
error happens, though.)

<snip>
> +/* Registers Definition */
> +#define IM_CTX_SIZE    0x40
> +
> +#define IMCTR                0x0000
> +/*
> + * These fields are implemented in IPMMU-MM only. So, can be set for
> + * Root IPMMU only.
> + */
> +#define IMCTR_VA64           (1 << 29)
> +#define IMCTR_TRE            (1 << 17)
> +#define IMCTR_AFE            (1 << 16)
> +#define IMCTR_RTSEL_MASK     (3 << 4)
> +#define IMCTR_RTSEL_SHIFT    4
> +#define IMCTR_TREN           (1 << 3)
> +/*
> + * These fields are common for all IPMMU devices. So, can be set for
> + * Cache IPMMUs as well.
> + */
> +#define IMCTR_INTEN          (1 << 2)
> +#define IMCTR_FLUSH          (1 << 1)
> +#define IMCTR_MMUEN          (1 << 0)
> +#define IMCTR_COMMON_MASK    (7 << 0)
> +
> +#define IMCAAR               0x0004
> +
> +#define IMTTBCR                        0x0008
> +#define IMTTBCR_EAE                    (1 << 31)
> +#define IMTTBCR_PMB                    (1 << 30)
> +#define IMTTBCR_SH1_NON_SHAREABLE      (0 << 28)
> +#define IMTTBCR_SH1_OUTER_SHAREABLE    (2 << 28)
> +#define IMTTBCR_SH1_INNER_SHAREABLE    (3 << 28)
> +#define IMTTBCR_SH1_MASK               (3 << 28)
> +#define IMTTBCR_ORGN1_NC               (0 << 26)
> +#define IMTTBCR_ORGN1_WB_WA            (1 << 26)
> +#define IMTTBCR_ORGN1_WT               (2 << 26)
> +#define IMTTBCR_ORGN1_WB               (3 << 26)
> +#define IMTTBCR_ORGN1_MASK             (3 << 26)
> +#define IMTTBCR_IRGN1_NC               (0 << 24)
> +#define IMTTBCR_IRGN1_WB_WA            (1 << 24)
> +#define IMTTBCR_IRGN1_WT               (2 << 24)
> +#define IMTTBCR_IRGN1_WB               (3 << 24)
> +#define IMTTBCR_IRGN1_MASK             (3 << 24)
> +#define IMTTBCR_TSZ1_MASK              (1f << 16)

At the moment no one uses it, but this should be (0x1f << 16).

<snip>
> +/* Xen IOMMU ops */
> +static int __must_check ipmmu_iotlb_flush_all(struct domain *d)
> +{
> +    struct ipmmu_vmsa_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
> +
> +    if ( !xen_domain || !xen_domain->root_domain )
> +        return 0;
> +
> +    spin_lock(&xen_domain->lock);

Are local irqs already disabled here?
If not, you should use spin_lock_irqsave() because ipmmu_irq() also
takes the lock.
# To be honest, in the normal case no irq should happen with the current
# implementation, though.

> +    ipmmu_tlb_invalidate(xen_domain->root_domain);
> +    spin_unlock(&xen_domain->lock);
> +
> +    return 0;
> +}
> +
<snip>
> +static int ipmmu_assign_device(struct domain *d, u8 devfn, struct device *dev,
> +                               uint32_t flag)
> +{
> +    struct ipmmu_vmsa_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
> +    struct ipmmu_vmsa_domain *domain;
> +    int ret;
> +
> +    if ( !xen_domain )
> +        return -EINVAL;
> +
> +    if ( !to_ipmmu(dev) )
> +        return -ENODEV;
> +
> +    spin_lock(&xen_domain->lock);

Same here.

> +    /*
> +     * The IPMMU context for the Xen domain is not allocated beforehand
> +     * (at the Xen domain creation time), but on demand only, when the first
> +     * master device being attached to it.
> +     * Create Root IPMMU domain which context will be mapped to this Xen domain
> +     * if not exits yet.
> +     */
> +    if ( !xen_domain->root_domain )
> +    {
> +        domain = ipmmu_alloc_root_domain(d);
> +        if ( IS_ERR(domain) )
> +        {
> +            ret = PTR_ERR(domain);
> +            goto out;
> +        }
> +
> +        xen_domain->root_domain = domain;
> +    }
> +
> +    if ( to_domain(dev) )
> +    {
> +        dev_err(dev, "Already attached to IPMMU domain\n");
> +        ret = -EEXIST;
> +        goto out;
> +    }
> +
> +    /*
> +     * Master devices behind the same Cache IPMMU can be attached to the same
> +     * Cache IPMMU domain.
> +     * Before creating new IPMMU domain check to see if the required one
> +     * already exists for this Xen domain.
> +     */
> +    domain = ipmmu_get_cache_domain(d, dev);
> +    if ( !domain )
> +    {
> +        /* Create new IPMMU domain this master device will be attached to. */
> +        domain = ipmmu_alloc_cache_domain(d);
> +        if ( IS_ERR(domain) )
> +        {
> +            ret = PTR_ERR(domain);
> +            goto out;
> +        }
> +
> +        /* Chain new IPMMU domain to the Xen domain. */
> +        list_add(&domain->list, &xen_domain->cache_domains);
> +    }
> +
> +    ret = ipmmu_attach_device(domain, dev);
> +    if ( ret )
> +    {
> +        /*
> +         * Destroy Cache IPMMU domain only if there are no master devices
> +         * attached to it.
> +         */
> +        if ( !domain->refcount )
> +            ipmmu_free_cache_domain(domain);
> +    }
> +    else
> +    {
> +        domain->refcount++;
> +        set_domain(dev, domain);
> +    }
> +
> +out:
> +    spin_unlock(&xen_domain->lock);
> +
> +    return ret;
> +}
> +
> +static int ipmmu_deassign_device(struct domain *d, struct device *dev)
> +{
> +    struct ipmmu_vmsa_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
> +    struct ipmmu_vmsa_domain *domain = to_domain(dev);
> +
> +    if ( !domain || domain->d != d )
> +    {
> +        dev_err(dev, "Not attached to %pd\n", d);
> +        return -ESRCH;
> +    }
> +
> +    spin_lock(&xen_domain->lock);

Same here.

> +    ipmmu_detach_device(domain, dev);
> +    set_domain(dev, NULL);
> +    domain->refcount--;
> +
> +    /*
> +     * Destroy Cache IPMMU domain only if there are no master devices
> +     * attached to it.
> +     */
> +    if ( !domain->refcount )
> +        ipmmu_free_cache_domain(domain);
> +
> +    spin_unlock(&xen_domain->lock);
> +
> +    return 0;
> +}
<snip>
> +static void __hwdom_init ipmmu_iommu_hwdom_init(struct domain *d)
> +{
> +    /* Set to false options not supported on ARM. */
> +    if ( iommu_hwdom_inclusive )
> +        printk(XENLOG_WARNING "ipmmu: map-inclusive dom0-iommu option is not supported on ARM\n");
> +    iommu_hwdom_inclusive = false;
> +    if ( iommu_hwdom_reserved == 1 )
> +        printk(XENLOG_WARNING "ipmmu: map-reserved dom0-iommu option is not supported on ARM\n");
> +    iommu_hwdom_reserved = 0;
> +
> +    arch_iommu_hwdom_init(d);
> +}
> +
> +static void ipmmu_iommu_domain_teardown(struct domain *d)
> +{
> +    struct ipmmu_vmsa_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
> +
> +    if ( !xen_domain )
> +        return;
> +
> +    spin_lock(&xen_domain->lock);

Same here.

> +    /*
> +     * Destroy Root IPMMU domain which context is mapped to this Xen domain
> +     * if exits.
> +     */
> +    if ( xen_domain->root_domain )
> +        ipmmu_free_root_domain(xen_domain->root_domain);
> +
> +    spin_unlock(&xen_domain->lock);
> +
> +    /*
> +     * We assume that all master devices have already been detached from
> +     * this Xen domain and there must be no associated Cache IPMMU domains
> +     * in use.
> +     */
> +    ASSERT(list_empty(&xen_domain->cache_domains));

I think this ASSERT should be checked while &xen_domain->lock is still held.

> +    xfree(xen_domain);
> +    dom_iommu(d)->arch.priv = NULL;
> +}
> +
> +static const struct iommu_ops ipmmu_iommu_ops =
> +{
> +    .init            = ipmmu_iommu_domain_init,
> +    .hwdom_init      = ipmmu_iommu_hwdom_init,
> +    .teardown        = ipmmu_iommu_domain_teardown,
> +    .iotlb_flush     = ipmmu_iotlb_flush,
> +    .iotlb_flush_all = ipmmu_iotlb_flush_all,
> +    .assign_device   = ipmmu_assign_device,
> +    .reassign_device = ipmmu_reassign_device,
> +    .map_page        = arm_iommu_map_page,
> +    .unmap_page      = arm_iommu_unmap_page,
> +    .add_device      = ipmmu_add_device,
> +};
> +
> +/* RCAR GEN3 product and cut information. */

"R-Car Gen3 SoCs" is better than "RCAR GEN3".

> +#define RCAR_PRODUCT_MASK    0x00007F00
> +#define RCAR_PRODUCT_H3      0x00004F00
> +#define RCAR_PRODUCT_M3      0x00005200

At least, I think this should be M3W, instead of M3.
# FYI, the M3-W and M3-W+ product values are the same.

<snip>

Best regards,
Yoshihiro Shimoda


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

