Re: [PATCH 5/5] xen/arm: smmuv1: Intelligent SMR allocation
Hello Julien,

> On 20 Mar 2021, at 12:01 pm, Julien Grall <julien@xxxxxxx> wrote:
>
> On 16/03/2021 17:54, Rahul Singh wrote:
>> Hello Julien,
>
> Hi Rahul,
>
>>> On 16 Mar 2021, at 3:08 pm, Julien Grall <julien@xxxxxxx> wrote:
>>>
>>> Hi Rahul,
>>>
>>> On 09/03/2021 18:19, Rahul Singh wrote:
>>>> Backport 588888a7399db352d2b1a41c9d5b3bf0fd482390
>>>> "iommu/arm-smmu: Intelligent SMR allocation" from the Linux kernel.
>>>> This patch fixes the stream-match conflict issue when two devices have
>>>> the same stream ID.
>>>> The only differences while applying this patch are using a spinlock in
>>>> place of a mutex and moving the iommu_group_alloc(..) call in
>>>> arm_smmu_add_device(..) from the start of the function to the end.
>>>
>>> As you may remember from the discussion on the SMMUv3 thread, replacing a
>>> mutex with a spinlock has consequences. Can you explain why this is fine?
>>
>> Yes, I remember the discussion on the SMMUv3 thread; replacing a mutex
>> with a spinlock has consequences.
>> I replaced the mutex with a spinlock in the SMMUv1 code where we configure
>> the SMMUv1 hardware, in arm_smmu_master_alloc_smes(..).
>> I think it is fine to use a spinlock in place of a mutex in SMMUv1, where
>> we configure the hardware via registers, as opposed to SMMUv3, where we
>> configure the hardware through memory-based circular buffer queues.
>> Configuring the hardware via queues might take a long time, which is why a
>> mutex is the right choice there, but configuring the hardware via registers
>> is very fast.
>> Configuring the SMMUv1 via registers is very fast, so there are no side
>> effects from using a spinlock. Let me know your view on this.
>
> This looks fine. Can you explain it in the commit message?

Yes, I will add the explanation to the commit message and will send the v2.

Regards,
Rahul

> Cheers,
>
> --
> Julien Grall
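[Editorial note: the sketch below is not taken from the Xen or Linux sources. It is a
minimal user-space illustration of the reuse-or-allocate idea behind "intelligent SMR
allocation" and of why a spinlock is acceptable when the per-entry work is only a few
fast register-style writes. The names smr_entry, smr_alloc and NUM_SMRS are invented
for illustration, and pthread spinlocks stand in for the hypervisor's spinlock.]

/*
 * Sketch: share a Stream Match Register (SMR) entry between masters that
 * present the same (id, mask), otherwise claim a free entry. The critical
 * section is short because the hardware update is a handful of MMIO writes,
 * which is the argument for a spinlock rather than a mutex here.
 */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_SMRS 8

struct smr_entry {
    uint16_t id;
    uint16_t mask;
    int      valid;
    int      count;   /* how many masters currently share this entry */
};

static struct smr_entry smrs[NUM_SMRS];
static pthread_spinlock_t smr_lock;

/* Return the SMR index used for this (id, mask), or -1 if none is free. */
static int smr_alloc(uint16_t id, uint16_t mask)
{
    int i, free_idx = -1;

    pthread_spin_lock(&smr_lock);
    for (i = 0; i < NUM_SMRS; i++) {
        if (!smrs[i].valid) {
            if (free_idx < 0)
                free_idx = i;
            continue;
        }
        /* Two masters with the same stream ID share one entry. */
        if (smrs[i].id == id && smrs[i].mask == mask) {
            smrs[i].count++;
            pthread_spin_unlock(&smr_lock);
            return i;
        }
    }
    if (free_idx >= 0) {
        smrs[free_idx].id    = id;
        smrs[free_idx].mask  = mask;
        smrs[free_idx].valid = 1;
        smrs[free_idx].count = 1;
        /* Real code would program the SMRn register here: one fast MMIO write. */
    }
    pthread_spin_unlock(&smr_lock);
    return free_idx;
}

int main(void)
{
    pthread_spin_init(&smr_lock, PTHREAD_PROCESS_PRIVATE);
    printf("master A -> SMR %d\n", smr_alloc(0x10, 0x0));
    printf("master B (same stream ID) -> SMR %d\n", smr_alloc(0x10, 0x0));
    printf("master C -> SMR %d\n", smr_alloc(0x20, 0x0));
    pthread_spin_destroy(&smr_lock);
    return 0;
}

Under these assumptions, masters A and B end up sharing one SMR entry, which is the
conflict the backported patch resolves; the whole allocation runs without sleeping,
so holding a spinlock for its duration is safe.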