Re: [PATCH] xen/arm: smmuv1: remove iommu group when deassign a device
> Hi Mykyta,
>
>> On 16 Jun 2022, at 8:48 am, Mykyta Poturai <mykyta.poturai@xxxxxxxxx> wrote:
>>
>> Hi Julien, Rahul,
>> I've encountered a similar problem with the IMX8 GPU recently. It wasn't
>> probing properly after a domain reboot. After some digging, I came to the
>> same solution as Rahul and found this thread. I also encountered the
>> occasional "Unexpected global fault, this could be serious" error message
>> when destroying a domain with an actively working GPU.
>>
>>> Hmmmm.... Looking at the code, arm_smmu_alloc_smes() doesn't seem to use
>>> the domain information. So why would it need to be done every time it is
>>> assigned?
>>
>> Indeed, after removing the arm_smmu_master_free_smes() call, both the
>> reboot and global fault issues are gone. If I understand correctly, device
>> removal is not yet supported, so I can't find a proper place for the
>> arm_smmu_master_free_smes() call. Should we remove the function completely,
>> just leave it commented out for later, or something else?
>>
>> Rahul, are you still working on this, or could I send my patch?
>
> Yes, I have this on my to-do list, but I was busy with other work and it
> got delayed.
>
> I created another solution for this issue, in which we don't need to call
> arm_smmu_master_free_smes() in arm_smmu_detach_dev(); instead we can
> configure the S2CR value to type fault in the detach function.
>
> A new function, arm_smmu_domain_remove_master(), is called in the detach
> function to revert the changes done by arm_smmu_domain_add_master() in the
> attach function.
>
> I don't have any board to test the patch. If it is okay, could you please
> test the patch and let me know the result?
>
> diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
> index 69511683b4..da3adf8e7f 100644
> --- a/xen/drivers/passthrough/arm/smmu.c
> +++ b/xen/drivers/passthrough/arm/smmu.c
> @@ -1598,21 +1598,6 @@ out_err:
>          return ret;
>  }
>
> -static void arm_smmu_master_free_smes(struct arm_smmu_master_cfg *cfg)
> -{
> -        struct arm_smmu_device *smmu = cfg->smmu;
> -        int i, idx;
> -        struct iommu_fwspec *fwspec = arm_smmu_get_fwspec(cfg);
> -
> -        spin_lock(&smmu->stream_map_lock);
> -        for_each_cfg_sme(cfg, i, idx, fwspec->num_ids) {
> -                if (arm_smmu_free_sme(smmu, idx))
> -                        arm_smmu_write_sme(smmu, idx);
> -                cfg->smendx[i] = INVALID_SMENDX;
> -        }
> -        spin_unlock(&smmu->stream_map_lock);
> -}
> -
>  static int arm_smmu_domain_add_master(struct arm_smmu_domain *smmu_domain,
>                                        struct arm_smmu_master_cfg *cfg)
>  {
> @@ -1635,6 +1620,20 @@ static int arm_smmu_domain_add_master(struct arm_smmu_domain *smmu_domain,
>          return 0;
>  }
>
> +static void arm_smmu_domain_remove_master(struct arm_smmu_domain *smmu_domain,
> +                                          struct arm_smmu_master_cfg *cfg)
> +{
> +        struct arm_smmu_device *smmu = smmu_domain->smmu;
> +        struct arm_smmu_s2cr *s2cr = smmu->s2crs;
> +        struct iommu_fwspec *fwspec = arm_smmu_get_fwspec(cfg);
> +        int i, idx;
> +
> +        for_each_cfg_sme(cfg, i, idx, fwspec->num_ids) {
> +                s2cr[idx] = s2cr_init_val;
> +                arm_smmu_write_s2cr(smmu, idx);
> +        }
> +}
> +
>  static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
>  {
>          int ret;
> @@ -1684,10 +1683,11 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
>
>  static void arm_smmu_detach_dev(struct iommu_domain *domain, struct device *dev)
>  {
> +        struct arm_smmu_domain *smmu_domain = domain->priv;
>          struct arm_smmu_master_cfg *cfg = find_smmu_master_cfg(dev);
>
>          if (cfg)
> -                arm_smmu_master_free_smes(cfg);
> +                arm_smmu_domain_remove_master(smmu_domain, cfg);
>
>  }
>
> Regards,
> Rahul

Hello Rahul,

For me, this patch fixed the issue with the GPU not probing after a domain
reboot, but it did not fix the "Unexpected global fault" that occasionally
happens when destroying a domain with an actively working GPU. That said, I
am not sure whether that issue is relevant here.

Regards,
Mykyta
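For context on the mechanism Rahul's patch relies on: each Stream-to-Context
Register (S2CR) entry maps a device's stream ID to a translation context,
bypass, or fault. Resetting an entry to the fault type at detach means any
later DMA from that stream ID is blocked and raises a fault, while the SME
allocation itself is left in place, so re-attaching after a domain reboot
still works. The sketch below is a minimal, self-contained model of that
idea, not the driver code: the type names echo the driver's
enum arm_smmu_s2cr_type and s2cr_init_val, but the struct layout and the
detach_stream() helper are simplified and hypothetical.

/* Illustrative sketch of the S2CR reset performed at detach time. */
#include <stdio.h>

enum s2cr_type { S2CR_TYPE_TRANS, S2CR_TYPE_BYPASS, S2CR_TYPE_FAULT };

struct s2cr_entry {
        enum s2cr_type type;
        int cbndx;      /* context bank index; meaningful only for TRANS */
};

/* Models the driver's s2cr_init_val with bypass disabled: a detached
 * stream faults instead of passing DMA through untranslated. */
static const struct s2cr_entry s2cr_init_val = { .type = S2CR_TYPE_FAULT };

static void detach_stream(struct s2cr_entry *s2crs, int idx)
{
        /* Revert the entry programmed at attach time.  The real driver
         * then flushes the shadow entry to hardware (arm_smmu_write_s2cr);
         * here we only model the shadow array. */
        s2crs[idx] = s2cr_init_val;
}

int main(void)
{
        struct s2cr_entry s2crs[8] = {
                [3] = { .type = S2CR_TYPE_TRANS, .cbndx = 1 }, /* attached */
        };

        detach_stream(s2crs, 3);
        printf("entry 3 after detach: %s\n",
               s2crs[3].type == S2CR_TYPE_FAULT ? "FAULT" : "other");
        return 0;
}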