Re: [Xen-devel] [RFC v2 7/7] xen/iommu: smmu-v3: Add Xen specific code to enable the ported driver
On 10/12/2017 10:36 AM, Julien Grall wrote: > Hi Sameer, > > Given this is all Arm specific. I am not sure why people like Andrew, Jan > have been added. > > Please use scripts/get_maintainers to find the list of maintainers per > patches and avoid to CC all of them on each patches. Ok. > > On 21/09/17 01:37, Sameer Goel wrote: >> This driver follows an approach similar to smmu driver. The intent here >> is to reuse as much Linux code as possible. >> - Glue code has been introduced to bridge the API calls. >> - Called Linux functions from the Xen IOMMU function calls. >> - Xen modifications are preceded by /*Xen: comment */ >> >> Signed-off-by: Sameer Goel <sgoel@xxxxxxxxxxxxxx> >> --- >> xen/drivers/passthrough/arm/Makefile | 1 + >> xen/drivers/passthrough/arm/smmu-v3.c | 853 >> +++++++++++++++++++++++++++++----- > > This is based on an old SMMUv3 version and I have been told there are some > changes may benefits Xen (such as increasing the timeout for sync) and some > optimisations also exist on the ML and will be queued soon. > > So maybe you want to re-sync at least to master. > >> 2 files changed, 738 insertions(+), 116 deletions(-) >> >> diff --git a/xen/drivers/passthrough/arm/Makefile >> b/xen/drivers/passthrough/arm/Makefile >> index f4cd26e..57a6da6 100644 >> --- a/xen/drivers/passthrough/arm/Makefile >> +++ b/xen/drivers/passthrough/arm/Makefile >> @@ -1,2 +1,3 @@ >> obj-y += iommu.o >> obj-y += smmu.o >> +obj-y += smmu-v3.o > > Do we want SMMUv3 to be built on Arm32? Maybe we should introduce a new > Kconfig to let the user select. Agreed. I will define a CONFIG_ARM_SMMU_V3. > >> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c >> b/xen/drivers/passthrough/arm/smmu-v3.c >> index 380969a..8f3b43d 100644 >> --- a/xen/drivers/passthrough/arm/smmu-v3.c >> +++ b/xen/drivers/passthrough/arm/smmu-v3.c >> @@ -18,28 +18,266 @@ >> * Author: Will Deacon <will.deacon@xxxxxxx> >> * >> * This driver is powered by bad coffee and bombay mix. >> + * >> + * >> + * Based on Linux drivers/iommu/arm-smmu-v3.c >> + * => commit bdf95923086fb359ccb44c815724c3ace1611c90 >> + * >> + * Xen modifications: >> + * Sameer Goel <sgoel@xxxxxxxxxxxxxx> >> + * Copyright (C) 2017, The Linux Foundation, All rights reserved. >> + * >> */ >> -#include <linux/acpi.h> >> -#include <linux/acpi_iort.h> >> -#include <linux/delay.h> >> -#include <linux/dma-iommu.h> >> -#include <linux/err.h> >> -#include <linux/interrupt.h> >> -#include <linux/iommu.h> >> -#include <linux/iopoll.h> >> -#include <linux/module.h> >> -#include <linux/msi.h> >> -#include <linux/of.h> >> -#include <linux/of_address.h> >> -#include <linux/of_iommu.h> >> -#include <linux/of_platform.h> >> -#include <linux/pci.h> >> -#include <linux/platform_device.h> >> - >> -#include <linux/amba/bus.h> >> - >> -#include "io-pgtable.h" >> +#include <xen/config.h> > > This is not necessary. > >> +#include <xen/delay.h> >> +#include <xen/errno.h> >> +#include <xen/err.h> >> +#include <xen/irq.h> >> +#include <xen/lib.h> >> +#include <xen/list.h> >> +#include <xen/mm.h> >> +#include <xen/vmap.h> >> +#include <xen/rbtree.h> >> +#include <xen/sched.h> >> +#include <xen/sizes.h> >> +#include <asm/atomic.h> >> +#include <asm/device.h> >> +#include <asm/io.h> >> +#include <asm/platform.h> >> +#include <xen/acpi.h> > > Please order the includes alphabetically with xen/* first then asm/* Done. 
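For the CONFIG_ARM_SMMU_V3 option mentioned above, the Kconfig/Makefile change could look roughly like the sketch below (the option name comes from the reply; the prompt text, EXPERT gating and default are my assumptions, not part of the patch):

    config ARM_SMMU_V3
    	bool "ARM SMMUv3 support" if EXPERT
    	depends on ARM_64
    	default y
    	help
    	  Support for implementations of the ARM System MMU architecture
    	  version 3, used to restrict DMA from devices assigned to guests.

and in xen/drivers/passthrough/arm/Makefile:

    obj-$(CONFIG_ARM_SMMU_V3) += smmu-v3.o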
> >> + >> +typedef paddr_t phys_addr_t; >> +typedef paddr_t dma_addr_t; >> + >> +/* Alias to Xen device tree helpers */ >> +#define device_node dt_device_node >> +#define of_phandle_args dt_phandle_args >> +#define of_device_id dt_device_match >> +#define of_match_node dt_match_node >> +#define of_property_read_u32(np, pname, out) (!dt_property_read_u32(np, >> pname, out)) >> +#define of_property_read_bool dt_property_read_bool >> +#define of_parse_phandle_with_args dt_parse_phandle_with_args >> +#define mutex spinlock_t >> +#define mutex_init spin_lock_init >> +#define mutex_lock spin_lock >> +#define mutex_unlock spin_unlock > > mutex and spinlock are not the same. The former is sleeping whilst the latter > is not. > > Can you please explain why this is fine and possibly add that in a comment? > The mutex is used to protect access to the SMMU device's internal data structures when setting up the S2 config and installing STEs for a given device in Linux. The STE programming operation can be comparatively long, but in the current testing I did not see this blocking for too long. I will put in a comment (see the sketch after this hunk). >> + >> +/* Xen: Helpers to get device MMIO and IRQs */ >> +struct resource { >> + u64 addr; >> + u64 size; >> + unsigned int type; >> +}; > > Likely we want a compat header for defining Linux helpers. This would avoid > replicating it everywhere. Agreed. > >> + >> +#define resource_size(res) ((res)->size) >> + >> +#define platform_device device >> + >> +#define IORESOURCE_MEM 0 >> +#define IORESOURCE_IRQ 1 >> + >> +static struct resource *platform_get_resource(struct platform_device *pdev, >> + unsigned int type, >> + unsigned int num) >> +{ >> + /* >> + * The resource is only used between 2 calls of platform_get_resource. >> + * It's quite ugly but it's avoid to add too much code in the part >> + * imported from Linux >> + */ >> + static struct resource res; >> + struct acpi_iort_node *iort_node; >> + struct acpi_iort_smmu_v3 *node_smmu_data; >> + int ret = 0; >> + >> + res.type = type; >> + >> + switch (type) { >> + case IORESOURCE_MEM: >> + if (pdev->type == DEV_ACPI) { >> + ret = 1; >> + iort_node = pdev->acpi_node; >> + node_smmu_data = >> + (struct acpi_iort_smmu_v3 *)iort_node->node_data; >> + >> + if (node_smmu_data != NULL) { >> + res.addr = node_smmu_data->base_address; >> + res.size = SZ_128K; >> + ret = 0; >> + } >> + } else { >> + ret = dt_device_get_address(dev_to_dt(pdev), num, >> + &res.addr, &res.size); >> + } >> + >> + return ((ret) ? NULL : &res); >> + >> + case IORESOURCE_IRQ: >> + ret = platform_get_irq(dev_to_dt(pdev), num); > > No IRQ for ACPI? For IRQs the code calls platform_get_irq_byname. So, the IORESOURCE_IRQ implementation is not needed at all (DT or ACPI).
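Something like the following is roughly what that promised comment would need to capture; the defines are the ones quoted above, the comment wording is only a sketch of the rationale given in this reply:

    /*
     * Xen: mutex is aliased to spinlock below. In Linux the mutex only
     * serialises setting up the S2 config and installing STEs for a device
     * at attach time; the critical section is short and nothing in it
     * sleeps, so a (non-sleeping) spinlock is used as a substitute in Xen.
     */
    #define mutex           spinlock_t
    #define mutex_init      spin_lock_init
    #define mutex_lock      spin_lock
    #define mutex_unlock    spin_unlock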
> >> + >> + if (ret < 0) >> + return NULL; >> + >> + res.addr = ret; >> + res.size = 1; >> + >> + return &res; >> + >> + default: >> + return NULL; >> + } >> +} >> + >> +static int platform_get_irq_byname(struct platform_device *pdev, const char >> *name) >> +{ >> + const struct dt_property *dtprop; >> + struct acpi_iort_node *iort_node; >> + struct acpi_iort_smmu_v3 *node_smmu_data; >> + int ret = 0; >> + >> + if (pdev->type == DEV_ACPI) { >> + iort_node = pdev->acpi_node; >> + node_smmu_data = (struct acpi_iort_smmu_v3 *)iort_node->node_data; >> + >> + if (node_smmu_data != NULL) { >> + if (!strcmp(name, "eventq")) >> + ret = node_smmu_data->event_gsiv; >> + else if (!strcmp(name, "priq")) >> + ret = node_smmu_data->pri_gsiv; >> + else if (!strcmp(name, "cmdq-sync")) >> + ret = node_smmu_data->sync_gsiv; >> + else if (!strcmp(name, "gerror")) >> + ret = node_smmu_data->gerr_gsiv; >> + else >> + ret = -EINVAL; >> + } >> + } else { >> + dtprop = dt_find_property(dev_to_dt(pdev), "interrupt-names", NULL); >> + if (!dtprop) >> + return -EINVAL; >> + >> + if (!dtprop->value) >> + return -ENODATA; >> + } >> + >> + return ret; >> +} >> + >> +#define readx_poll_timeout(op, addr, val, cond, sleep_us, timeout_us) \ >> +({ \ >> + s_time_t deadline = NOW() + MICROSECS(timeout_us); \ >> + for (;;) { \ >> + (val) = op(addr); \ >> + if (cond) \ >> + break; \ >> + if (NOW() > deadline) { \ >> + (val) = op(addr); \ >> + break; \ >> + } \ >> + cpu_relax(); \ > > I don't think calling cpu_relax() is correct here. I will remove cpu_relax() from here. It should not be needed. > >> + udelay(sleep_us); \ >> + } \ >> + (cond) ? 0 : -ETIMEDOUT; \ >> +}) >> + >> +#define readl_relaxed_poll_timeout(addr, val, cond, delay_us, timeout_us) \ >> + readx_poll_timeout(readl_relaxed, addr, val, cond, delay_us, timeout_us) >> + >> +/* Xen: Helpers for IRQ functions */ >> +#define request_irq(irq, func, flags, name, dev) request_irq(irq, flags, >> func, name, dev) >> +#define free_irq release_irq >> + >> +enum irqreturn { >> + IRQ_NONE = (0 << 0), >> + IRQ_HANDLED = (1 << 0), >> +}; >> + >> +typedef enum irqreturn irqreturn_t; >> + >> +/* Device logger functions */ >> +#define dev_print(dev, lvl, fmt, ...) \ >> + printk(lvl "smmu: " fmt, ## __VA_ARGS__) >> + >> +#define dev_dbg(dev, fmt, ...) dev_print(dev, XENLOG_DEBUG, fmt, ## >> __VA_ARGS__) >> +#define dev_notice(dev, fmt, ...) dev_print(dev, XENLOG_INFO, fmt, ## >> __VA_ARGS__) >> +#define dev_warn(dev, fmt, ...) dev_print(dev, XENLOG_WARNING, fmt, ## >> __VA_ARGS__) >> +#define dev_err(dev, fmt, ...) dev_print(dev, XENLOG_ERR, fmt, ## >> __VA_ARGS__) >> +#define dev_info(dev, fmt, ...) dev_print(dev, XENLOG_INFO, fmt, ## >> __VA_ARGS__) >> + >> +#define dev_err_ratelimited(dev, fmt, ...) \ >> + dev_print(dev, XENLOG_ERR, fmt, ## __VA_ARGS__) >> + >> +#define dev_name(dev) dt_node_full_name(dev_to_dt(dev)) >> + >> +/* Alias to Xen allocation helpers */ >> +#define kfree xfree >> +#define kmalloc(size, flags) _xmalloc(size, sizeof(void *)) >> +#define kzalloc(size, flags) _xzalloc(size, sizeof(void *)) >> +#define devm_kzalloc(dev, size, flags) _xzalloc(size, sizeof(void *)) >> +#define kmalloc_array(size, n, flags) _xmalloc_array(size, sizeof(void >> *), n) >> + >> +/* Compatibility defines */ >> +#undef WARN_ON >> +#define WARN_ON(cond) (!!cond) > > Why do you redefine WARN_ON? Need it to be a scalar value. The Xen > implementation is a do..while. I did not want a large impact, hence the local > define. I can try proposing a new common define.
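To make the WARN_ON() point above concrete: Xen's WARN_ON() expands to a do { ... } while statement, so Linux idioms such as "if (WARN_ON(!size)) return -EINVAL;" would not compile. If a common define were proposed, one option (my sketch only, not what the patch does; the patch simply evaluates the condition and drops the warning) would be an expression that still prints something:

    #undef WARN_ON
    #define WARN_ON(cond) ({                                        \
        bool warn_on_ret_ = !!(cond);                               \
        if ( unlikely(warn_on_ret_) )                               \
            printk(XENLOG_WARNING "WARN_ON hit at %s:%d\n",         \
                   __FILE__, __LINE__);                             \
        warn_on_ret_;                                               \
    })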
> >> +#define WARN_ON_ONCE(cond) WARN_ON(cond) > > Hmmm, can't we implement a common WARN_ON_ONCE? It will not really be executed. Defining it to maintain compatibility. I think this file is the right place for this define. We can send it to a generic file if so needed. > >> + >> +static void __iomem *devm_ioremap_resource(struct device *dev, >> + struct resource *res) >> +{ >> + void __iomem *ptr; >> + >> + if (!res || res->type != IORESOURCE_MEM) { >> + dev_err(dev, "Invalid resource\n"); >> + return ERR_PTR(-EINVAL); >> + } >> + >> + ptr = ioremap_nocache(res->addr, res->size); >> + if (!ptr) { >> + dev_err(dev, >> + "ioremap failed (addr 0x%"PRIx64" size 0x%"PRIx64")\n", >> + res->addr, res->size); >> + return ERR_PTR(-ENOMEM); >> + } >> + >> + return ptr; >> +} >> + >> +/* Xen: Dummy iommu_domain */ >> +struct iommu_domain { >> + /* Runtime SMMU configuration for this iommu_domain */ >> + struct arm_smmu_domain *priv; >> + unsigned int type; >> + >> + atomic_t ref; >> + /* Used to link iommu_domain contexts for a same domain. >> + * There is at least one per-SMMU to used by the domain. >> + */ >> + struct list_head list; >> +}; > > This is very similar to the SMMU version. Could we share some bits? I will propose a common header for arm-smmu files. > >> +/* Xen: Domain type definitions. Not really needed for Xen, defining to port >> + * Linux code as-is >> + */ >> +#define IOMMU_DOMAIN_UNMANAGED 0 >> +#define IOMMU_DOMAIN_DMA 1 >> +#define IOMMU_DOMAIN_IDENTITY 2 >> + >> +/* Xen: Describes information required for a Xen domain */ >> +struct arm_smmu_xen_domain { >> + spinlock_t lock; >> + /* List of iommu domains associated to this domain */ >> + struct list_head iommu_domains; >> +}; > > Ditto. > >> + >> +/* >> + * Xen: Information about each device stored in dev->archdata.iommu >> + * >> + * The dev->archdata.iommu stores the iommu_domain (runtime configuration of >> + * the SMMU). >> + */ >> +struct arm_smmu_xen_device { >> + struct iommu_domain *domain; >> +}; > > Ditto. > >> /* MMIO registers */ >> #define ARM_SMMU_IDR0 0x0 >> @@ -412,10 +650,12 @@ >> #define MSI_IOVA_BASE 0x8000000 >> #define MSI_IOVA_LENGTH 0x100000 >> +#if 0 /* Not applicable for Xen */ > > While the module_param_name() is not applicable in Xen, I don't see any > reason to remove the variable. Agreed. > >> static bool disable_bypass; > module_param_named(disable_bypass, >> disable_bypass, bool, S_IRUGO); >> MODULE_PARM_DESC(disable_bypass, >> "Disable bypass streams such that incoming transactions from devices >> that are not attached to an iommu domain will report an abort back to the >> device and will not be allowed to pass through the SMMU."); >> +#endif >> enum pri_resp { >> PRI_RESP_DENY, >> @@ -423,6 +663,7 @@ enum pri_resp { >> PRI_RESP_SUCC, >> }; >> +#if 0 /* Xen: No MSI support in this iteration */ >> enum arm_smmu_msi_index { >> EVTQ_MSI_INDEX, >> GERROR_MSI_INDEX, >> @@ -447,6 +688,7 @@ static phys_addr_t >> arm_smmu_msi_cfg[ARM_SMMU_MAX_MSIS][3] = { >> ARM_SMMU_PRIQ_IRQ_CFG2, >> }, >> }; >> +#endif >> struct arm_smmu_cmdq_ent { >> /* Common fields */ >> @@ -551,6 +793,8 @@ struct arm_smmu_s2_cfg { >> u16 vmid; >> u64 vttbr; >> u64 vtcr; >> + /* Xen: Domain associated to this configuration */ >> + struct domain *domain; >> }; >> struct arm_smmu_strtab_ent { >> @@ -623,9 +867,20 @@ struct arm_smmu_device { >> struct arm_smmu_strtab_cfg strtab_cfg; >> /* IOMMU core code handle */ >> - struct iommu_device iommu; >> + //struct iommu_device iommu; > > #if 0 but no // please. Done.
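For what such a shared header could carry, here is a sketch of the duplicated glue (the structs are the ones quoted above; the file name, e.g. xen/drivers/passthrough/arm/arm_smmu.h, is only a placeholder):

    /* Dummy iommu_domain, shared by the SMMUv2 and SMMUv3 drivers */
    struct iommu_domain {
        /* Runtime SMMU configuration for this iommu_domain */
        struct arm_smmu_domain *priv;
        unsigned int type;

        atomic_t ref;
        /* Used to link iommu_domain contexts belonging to the same Xen domain */
        struct list_head list;
    };

    /* Xen: per-domain bookkeeping */
    struct arm_smmu_xen_domain {
        spinlock_t lock;
        /* List of iommu domains associated with this Xen domain */
        struct list_head iommu_domains;
    };

    /* Xen: per-device information, stored in dev->archdata.iommu */
    struct arm_smmu_xen_device {
        struct iommu_domain *domain;
    };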
> >> + >> + /* Xen: Need to keep a list of SMMU devices */ >> + struct list_head devices; >> }; >> +/* Xen: Keep a list of devices associated with this driver */ >> +static DEFINE_SPINLOCK(arm_smmu_devices_lock); >> +static LIST_HEAD(arm_smmu_devices); >> +/* Xen: Helper for finding a device using fwnode */ >> +static >> +struct arm_smmu_device *arm_smmu_get_by_fwnode(struct fwnode_handle >> *fwnode); >> + >> + >> /* SMMU private data for each master */ >> struct arm_smmu_master_data { >> struct arm_smmu_device *smmu; >> @@ -642,7 +897,7 @@ enum arm_smmu_domain_stage { >> struct arm_smmu_domain { >> struct arm_smmu_device *smmu; >> - struct mutex init_mutex; /* Protects smmu pointer */ >> + mutex init_mutex; /* Protects smmu pointer */ >> struct io_pgtable_ops *pgtbl_ops; >> spinlock_t pgtbl_lock; >> @@ -737,15 +992,16 @@ static void queue_inc_prod(struct arm_smmu_queue *q) >> */ >> static int queue_poll_cons(struct arm_smmu_queue *q, bool drain, bool wfe) >> { >> - ktime_t timeout = ktime_add_us(ktime_get(), ARM_SMMU_POLL_TIMEOUT_US); >> + s_time_t deadline = NOW() + MICROSECS(ARM_SMMU_POLL_TIMEOUT_US); > > Please introduce proper wrappers to avoid the modification of the code. Done. > >> while (queue_sync_cons(q), (drain ? !queue_empty(q) : >> queue_full(q))) { >> - if (ktime_compare(ktime_get(), timeout) > 0) >> + >> + if (NOW() > deadline) > > Ditto. Done. > >> return -ETIMEDOUT; >> - if (wfe) { >> + if (wfe) > > Please avoid dropping the { > >> wfe(); >> - } else { > > Ditto. Done. > >> + else { >> cpu_relax(); > > Hmmm I now see why you added cpu_relax() at the top. Well, on Xen cpu_relax > is just a barrier. On Linux it is used to yield. > > And that bit is worrying me. The Linux code will allow context switching to > another task if the code is taking too much time. > > Xen is not preemptible, so is it fine? This is used when consuming the command queue and could be a potential performance issue if the queue is large (this is never the case). I am wondering if we should define a yield in the long run? > >> udelay(1); >> } >> @@ -931,7 +1187,7 @@ static void arm_smmu_cmdq_issue_cmd(struct >> arm_smmu_device *smmu, >> dev_err_ratelimited(smmu->dev, "CMD_SYNC timeout\n"); >> spin_unlock_irqrestore(&smmu->cmdq.lock, flags); >> } >> - >> +#if 0 > > Please avoid dropping the newline and explain why the #if 0. Done. > >> /* Context descriptor manipulation functions */ >> static u64 arm_smmu_cpu_tcr_to_cd(u64 tcr) >> { >> @@ -974,7 +1230,7 @@ static void arm_smmu_write_ctx_desc(struct >> arm_smmu_device *smmu, >> cfg->cdptr[3] = cpu_to_le64(cfg->cd.mair << CTXDESC_CD_3_MAIR_SHIFT); >> } >> - >> +#endif > > Ditto for the newline. Done. > >> /* Stream table manipulation functions */ >> static void >> arm_smmu_write_strtab_l1_desc(__le64 *dst, struct arm_smmu_strtab_l1_desc >> *desc) >> @@ -1044,7 +1300,7 @@ static void arm_smmu_write_strtab_ent(struct >> arm_smmu_device *smmu, u32 sid, >> ste_live = true; >> break; >> case STRTAB_STE_0_CFG_ABORT: >> - if (disable_bypass) >> + //No bypass override for Xen > > Why not leave the variable on top with a comment? This would avoid such a > change. Done. > >> break; >> default: >> BUG(); /* STE corruption */ >> @@ -1056,7 +1312,7 @@ static void arm_smmu_write_strtab_ent(struct >> arm_smmu_device *smmu, u32 sid, >> /* Bypass/fault */ >> if (!ste->assigned || !(ste->s1_cfg || ste->s2_cfg)) { >> - if (!ste->assigned && disable_bypass) >> + if (!ste->assigned) > > Ditto. Done.
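The wrappers asked for above could be as small as the sketch below, which would let queue_poll_cons() keep the Linux ktime code untouched (mapping ktime_t onto Xen's s_time_t / NOW() is the assumption here):

    typedef s_time_t ktime_t;

    /* Current time, as the Linux code expects from ktime_get() */
    static inline ktime_t ktime_get(void)
    {
        return NOW();
    }

    static inline ktime_t ktime_add_us(ktime_t kt, u64 usec)
    {
        return kt + MICROSECS(usec);
    }

    /* <0, 0 or >0 depending on the ordering of the two times */
    static inline int ktime_compare(ktime_t cmp1, ktime_t cmp2)
    {
        return (cmp1 < cmp2) ? -1 : ((cmp1 > cmp2) ? 1 : 0);
    }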
> >> val |= STRTAB_STE_0_CFG_ABORT; >> else >> val |= STRTAB_STE_0_CFG_BYPASS; >> @@ -1135,16 +1391,20 @@ static int arm_smmu_init_l2_strtab(struct >> arm_smmu_device *smmu, u32 sid) >> void *strtab; >> struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg; >> struct arm_smmu_strtab_l1_desc *desc = &cfg->l1_desc[sid >> >> STRTAB_SPLIT]; >> + u32 alignment = 0; >> if (desc->l2ptr) >> return 0; >> - size = 1 << (STRTAB_SPLIT + ilog2(STRTAB_STE_DWORDS) + 3); >> + size = 1 << (STRTAB_SPLIT + LOG_2(STRTAB_STE_DWORDS) + 3); > > I would prefer if you introduce ilog2. Done. > >> strtab = &cfg->strtab[(sid >> STRTAB_SPLIT) * STRTAB_L1_DESC_DWORDS]; >> desc->span = STRTAB_SPLIT + 1; >> + >> + alignment = 1 << ((5 + (desc->span - 1))); > > Do you mind explaining the 5? Also, will the shift always be < 32? It's from the "Level 1 Stream Table Descriptor" in the SMMU spec. I can add the ref to the spec. Yes, the span is hard-coded in the driver, and with any valid span value from the spec this cannot exceed 32. I have added a comment to the code. > >> + desc->l2ptr = _xzalloc(size, alignment); >> + desc->l2ptr_dma = virt_to_maddr(desc->l2ptr); > > _xzalloc can fail and virt_to_maddr will result in a panic. > I will move the check (see the sketch below). >> + >> if (!desc->l2ptr) { >> dev_err(smmu->dev, >> "failed to allocate l2 stream table for SID %u\n", >> @@ -1158,7 +1418,7 @@ static int arm_smmu_init_l2_strtab(struct >> arm_smmu_device *smmu, u32 sid) >> } >> /* IRQ and event handlers */ >> -static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev) >> +static void arm_smmu_evtq_thread(int irq, void *dev, struct cpu_user_regs >> *regs) > > Could you please introduce a wrapper instead as it was done in smmu.c? > Done. >> { >> int i; >> struct arm_smmu_device *smmu = dev; >> @@ -1186,7 +1446,6 @@ static irqreturn_t arm_smmu_evtq_thread(int irq, void >> *dev) >> /* Sync our overflow flag, as we believe we're up to speed */ >> q->cons = Q_OVF(q, q->prod) | Q_WRP(q, q->cons) | Q_IDX(q, q->cons); >> - return IRQ_HANDLED; >> } >> static void arm_smmu_handle_ppr(struct arm_smmu_device *smmu, u64 *evt) >> @@ -1203,7 +1462,7 @@ static void arm_smmu_handle_ppr(struct arm_smmu_device >> *smmu, u64 *evt) >> dev_info(smmu->dev, "unexpected PRI request received:\n"); >> dev_info(smmu->dev, >> - "\tsid 0x%08x.0x%05x: [%u%s] %sprivileged %s%s%s access at iova >> 0x%016llx\n", >> + "\tsid 0x%08x.0x%05x: [%u%s] %sprivileged %s%s%s access at iova >> 0x%016lx\n", > > Hmmm why? The variable is defined as unsigned long long in Linux (it uses the generic int-ll64.h). In Xen the definition changes to unsigned long for ARM_64, which actually makes sense. > >> sid, ssid, grpid, last ? "L" : "", >> evt[0] & PRIQ_0_PERM_PRIV ? "" : "un", >> evt[0] & PRIQ_0_PERM_READ ? "R" : "", >> @@ -1227,7 +1486,7 @@ static void arm_smmu_handle_ppr(struct arm_smmu_device >> *smmu, u64 *evt) >> } >> } >> -static irqreturn_t arm_smmu_priq_thread(int irq, void *dev) >> +static void arm_smmu_priq_thread(int irq, void *dev, struct cpu_user_regs >> *regs) > > Ditto about the prototype. Done.
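Taken together, the ilog2 alias and the moved allocation check discussed above would make that hunk look roughly like the sketch below (my reconstruction, not the verbatim v3 hunk); the alignment value follows the Level 1 Stream Table Descriptor rules referenced in the reply:

    /* Xen: alias so the Linux line can stay as-is (assuming LOG_2() is the helper already used above) */
    #define ilog2(x) LOG_2(x)

        size = 1 << (STRTAB_SPLIT + ilog2(STRTAB_STE_DWORDS) + 3);
        strtab = &cfg->strtab[(sid >> STRTAB_SPLIT) * STRTAB_L1_DESC_DWORDS];
        desc->span = STRTAB_SPLIT + 1;

        /* Alignment required of the level-2 table by the L1 Stream Table Descriptor */
        alignment = 1 << (5 + (desc->span - 1));

        desc->l2ptr = _xzalloc(size, alignment);
        if (!desc->l2ptr) {
            dev_err(smmu->dev,
                    "failed to allocate l2 stream table for SID %u\n", sid);
            return -ENOMEM;
        }
        /* Only translate the pointer once we know the allocation succeeded */
        desc->l2ptr_dma = virt_to_maddr(desc->l2ptr);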
> >> { >> struct arm_smmu_device *smmu = dev; >> struct arm_smmu_queue *q = &smmu->priq.q; >> @@ -1243,18 +1502,16 @@ static irqreturn_t arm_smmu_priq_thread(int irq, >> void *dev) >> /* Sync our overflow flag, as we believe we're up to speed */ >> q->cons = Q_OVF(q, q->prod) | Q_WRP(q, q->cons) | Q_IDX(q, q->cons); >> - return IRQ_HANDLED; >> } >> -static irqreturn_t arm_smmu_cmdq_sync_handler(int irq, void *dev) >> +static void arm_smmu_cmdq_sync_handler(int irq, void *dev, struct >> cpu_user_regs *regs) > > Ditto. Done. > >> { >> /* We don't actually use CMD_SYNC interrupts for anything */ >> - return IRQ_HANDLED; >> } >> static int arm_smmu_device_disable(struct arm_smmu_device *smmu); >> -static irqreturn_t arm_smmu_gerror_handler(int irq, void *dev) >> +static void arm_smmu_gerror_handler(int irq, void *dev, struct >> cpu_user_regs *regs) > > Ditto Done. > >> { >> u32 gerror, gerrorn, active; >> struct arm_smmu_device *smmu = dev; >> @@ -1264,7 +1521,7 @@ static irqreturn_t arm_smmu_gerror_handler(int irq, >> void *dev) >> active = gerror ^ gerrorn; >> if (!(active & GERROR_ERR_MASK)) >> - return IRQ_NONE; /* No errors pending */ >> + return; /* No errors pending */ >> dev_warn(smmu->dev, >> "unexpected global error reported (0x%08x), this could be >> serious\n", >> @@ -1286,7 +1543,7 @@ static irqreturn_t arm_smmu_gerror_handler(int irq, >> void *dev) >> if (active & GERROR_MSI_CMDQ_ABT_ERR) { >> dev_warn(smmu->dev, "CMDQ MSI write aborted\n"); >> - arm_smmu_cmdq_sync_handler(irq, smmu->dev); >> + arm_smmu_cmdq_sync_handler(irq, smmu->dev, NULL); >> } >> if (active & GERROR_PRIQ_ABT_ERR) >> @@ -1299,7 +1556,6 @@ static irqreturn_t arm_smmu_gerror_handler(int irq, >> void *dev) >> arm_smmu_cmdq_skip_err(smmu); >> writel(gerror, smmu->base + ARM_SMMU_GERRORN); >> - return IRQ_HANDLED; >> } >> /* IO_PGTABLE API */ >> @@ -1311,11 +1567,13 @@ static void __arm_smmu_tlb_sync(struct >> arm_smmu_device *smmu) >> arm_smmu_cmdq_issue_cmd(smmu, &cmd); >> } >> +#if 0 /*Xen: Unused function */ >> static void arm_smmu_tlb_sync(void *cookie) >> { >> struct arm_smmu_domain *smmu_domain = cookie; >> __arm_smmu_tlb_sync(smmu_domain->smmu); >> } >> +#endif >> static void arm_smmu_tlb_inv_context(void *cookie) >> { >> @@ -1336,6 +1594,7 @@ static void arm_smmu_tlb_inv_context(void *cookie) >> __arm_smmu_tlb_sync(smmu); >> } >> +#if 0 /*Xen: Unused functionality */ >> static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size, >> size_t granule, bool leaf, void *cookie) >> { >> @@ -1362,7 +1621,7 @@ static void arm_smmu_tlb_inv_range_nosync(unsigned >> long iova, size_t size, >> } while (size -= granule); >> } >> -static const struct iommu_gather_ops arm_smmu_gather_ops = { >> +static struct iommu_gather_ops arm_smmu_gather_ops = { >> .tlb_flush_all = arm_smmu_tlb_inv_context, >> .tlb_add_flush = arm_smmu_tlb_inv_range_nosync, >> .tlb_sync = arm_smmu_tlb_sync, >> @@ -1380,6 +1639,11 @@ static bool arm_smmu_capable(enum iommu_cap cap) >> return false; >> } >> } >> +#endif >> +/* Xen: Stub out DMA domain related functions */ >> +#define iommu_get_dma_cookie(dom) 0 >> +#define iommu_put_dma_cookie(dom) 0 > > Please stub them at the top of the file. Done. > >> + >> static struct iommu_domain *arm_smmu_domain_alloc(unsigned type) >> { >> @@ -1410,6 +1674,7 @@ static struct iommu_domain >> *arm_smmu_domain_alloc(unsigned type) >> return &smmu_domain->domain; >> } >> +#if 0 > > Please explain the #if 0 > Done. 
>> static int arm_smmu_bitmap_alloc(unsigned long *map, int span) >> { >> int idx, size = 1 << span; >> @@ -1427,36 +1692,20 @@ static void arm_smmu_bitmap_free(unsigned long *map, >> int idx) >> { >> clear_bit(idx, map); >> } >> +#endif >> static void arm_smmu_domain_free(struct iommu_domain *domain) >> { >> struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain); >> - struct arm_smmu_device *smmu = smmu_domain->smmu; >> - >> - iommu_put_dma_cookie(domain); >> - free_io_pgtable_ops(smmu_domain->pgtbl_ops); >> - >> - /* Free the CD and ASID, if we allocated them */ >> - if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) { >> - struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg; >> - >> - if (cfg->cdptr) { >> - dmam_free_coherent(smmu_domain->smmu->dev, >> - CTXDESC_CD_DWORDS << 3, >> - cfg->cdptr, >> - cfg->cdptr_dma); >> - >> - arm_smmu_bitmap_free(smmu->asid_map, cfg->cd.asid); >> - } >> - } else { >> - struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg; >> - if (cfg->vmid) >> - arm_smmu_bitmap_free(smmu->vmid_map, cfg->vmid); >> - } >> + /* >> + * Xen: Remove the free functions that are not used and code related >> + * to S1 translation. We just need to free the domain here. >> + */ > > Please use #if 0 rather than removing the code + comment on top. But I am not > sure why you drop the S2 free code. Shouldn't we allocate VMID from the SMMU? I am picking up the vmid from the domain I am sharing the page tables with. domain->arch.p2m.vmid. This seemed valid. Please let me know if we want to generate a different vmid? > >> kfree(smmu_domain); >> } >> +#if 0 >> static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain, >> struct io_pgtable_cfg *pgtbl_cfg) >> { >> @@ -1488,33 +1737,30 @@ out_free_asid: >> arm_smmu_bitmap_free(smmu->asid_map, asid); >> return ret; >> } >> +#endif >> -static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain >> *smmu_domain, >> - struct io_pgtable_cfg *pgtbl_cfg) >> +static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain) >> { >> - int vmid; >> - struct arm_smmu_device *smmu = smmu_domain->smmu; >> struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg; >> - vmid = arm_smmu_bitmap_alloc(smmu->vmid_map, smmu->vmid_bits); >> - if (vmid < 0) >> - return vmid; >> + /* Xen: Set the values as needed */ >> - cfg->vmid = (u16)vmid; >> - cfg->vttbr = pgtbl_cfg->arm_lpae_s2_cfg.vttbr; >> - cfg->vtcr = pgtbl_cfg->arm_lpae_s2_cfg.vtcr; >> + cfg->vmid = cfg->domain->arch.p2m.vmid; > > See my comment above. I am wondering why we are considering this value invalid? Since the page tables are shared we can use the vmid from the domain. Is the concern regarding the size? Thanks, Sameer > >> + cfg->vttbr = page_to_maddr(cfg->domain->arch.p2m.root); >> + cfg->vtcr = READ_SYSREG32(VTCR_EL2); > > This looks a bit suspicious. Looking at the specs, the bits does not seem to > correspond here. > > I will look at the rest either tomorrow or Monday. > > Cheers, > -- Qualcomm Datacenter Technologies as an affiliate of Qualcomm Technologies, Inc. Qualcomm Technologies, Inc. is a member of the Code Aurora Forum, a Linux Foundation Collaborative Project. _______________________________________________ Xen-devel mailing list Xen-devel@xxxxxxxxxxxxx https://lists.xen.org/xen-devel