
Re: [PATCH] xen/arm: fix gnttab_need_iommu_mapping


  • To: Julien Grall <julien@xxxxxxx>
  • From: Rahul Singh <Rahul.Singh@xxxxxxx>
  • Date: Tue, 9 Feb 2021 13:10:07 +0000
  • Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>, "lucmiccio@xxxxxxxxx" <lucmiccio@xxxxxxxxx>, xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Bertrand Marquis <Bertrand.Marquis@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
  • Delivery-date: Tue, 09 Feb 2021 13:10:23 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH] xen/arm: fix gnttab_need_iommu_mapping

Hello Julien,

> On 8 Feb 2021, at 6:49 pm, Julien Grall <julien@xxxxxxx> wrote:
> 
> 
> 
> On 08/02/2021 18:19, Rahul Singh wrote:
>> Hello Julien,
> 
> Hi Rahul,
> 
>>> On 8 Feb 2021, at 6:11 pm, Julien Grall <julien@xxxxxxx> wrote:
>>> 
>>> 
>>> 
>>> On 08/02/2021 18:06, Rahul Singh wrote:
>>>>> On 6 Feb 2021, at 12:38 am, Stefano Stabellini <sstabellini@xxxxxxxxxx> wrote:
>>>>> 
>>>>> Commit 91d4eca7add broke gnttab_need_iommu_mapping on ARM.
>>>>> The offending chunk is:
>>>>> 
>>>>> #define gnttab_need_iommu_mapping(d)                    \
>>>>> -    (is_domain_direct_mapped(d) && need_iommu(d))
>>>>> +    (is_domain_direct_mapped(d) && need_iommu_pt_sync(d))
>>>>> 
>>>>> On ARM we need gnttab_need_iommu_mapping to be true for dom0 when it is
>>>>> directly mapped, like the old check did, but the new check is always
>>>>> false.
>>>>> 
>>>>> In fact, need_iommu_pt_sync is defined as dom_iommu(d)->need_sync and
>>>>> need_sync is set as:
>>>>> 
>>>>>    if ( !is_hardware_domain(d) || iommu_hwdom_strict )
>>>>>        hd->need_sync = !iommu_use_hap_pt(d);
>>>>> 
>>>>> iommu_hwdom_strict is actually supposed to be ignored on ARM, see the
>>>>> definition in docs/misc/xen-command-line.pandoc:
>>>>> 
>>>>>    This option is hardwired to true for x86 PVH dom0's (as RAM belonging
>>>>>    to other domains in the system don't live in a compatible address
>>>>>    space), and is ignored for ARM.
>>>>> 
>>>>> But aside from that, the issue is that iommu_use_hap_pt(d) is true,
>>>>> hence, hd->need_sync is false, and gnttab_need_iommu_mapping(d) is false
>>>>> too.
>>>>> 
>>>>> As a consequence, when using PV network from a domU on a system where
>>>>> IOMMU is on from Dom0, I get:
>>>>> 
>>>>> (XEN) smmu: /smmu@fd800000: Unhandled context fault: fsr=0x402, iova=0x8424cb148, fsynr=0xb0001, cb=0
>>>>> [   68.290307] macb ff0e0000.ethernet eth0: DMA bus error: HRESP not OK
>>>> I also observed the IOMMU fault when a DomU guest is created and the grant
>>>> table is used while the IOMMU is enabled. I fixed the error in a different
>>>> way, but I am not sure if you are observing the same error. I submitted the
>>>> patch to the pci-passthrough integration branch. Please have a look and see
>>>> if it makes sense.
>>> 
>>> I believe this is the same error as Stefano has observed. However, your
>>> patch will unfortunately not work if you have a system with a mix of
>>> protected and non-protected DMA-capable devices.
>> Yes, you are right, that is what I thought when I fixed the error, but then
>> I reasoned in a different direction: if the IOMMU is enabled system-wide,
>> every device should be protected by the IOMMU.
> I am not aware of any rule preventing a mix of protected and unprotected 
> DMA-capable devices.
> 
> However, even if they are all protected by an IOMMU, some of the IOMMUs may 
> have been disabled by the firmware tables for various reasons (e.g. 
> performance, buggy SMMU...). For instance, this is the case on Juno where 2 
> out of 3 SMMUs are disabled in the Linux upstream DT.
> 
> As we don't know which device will use the grant for DMA, we always need to 
> return the machine physical address.

Thanks for the information; that clears up my doubts.

Now I understand that we need to return the machine physical address. I had
fixed the issue for the case where there is no IOMMU mapping call for the
grant pages. My thinking was that only when the page tables are not shared
between the IOMMU and the CPU can we add the mapping for the grant table to
the IOMMU page tables by calling the iommu_map/iommu_unmap functions. That is
why my fix returned the IPA, as there is no (mfn -> mfn) mapping in the IOMMU
page table for DMA. After this patch we no longer need to return the IPA, as
the (mfn -> mfn) mapping will be present in the P2M.
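
To summarise the chain for anyone following the thread (identifiers taken
from the snippets quoted above; this is only an illustration, not the actual
patch):

    /*
     * On Arm dom0 the page tables are shared between the CPU and the
     * IOMMU, so the chain of definitions evaluates as:
     *
     *   iommu_use_hap_pt(d)   -> true
     *   hd->need_sync         -> false  (set to !iommu_use_hap_pt(d))
     *   need_iommu_pt_sync(d) -> false  (dom_iommu(d)->need_sync)
     *
     * and therefore, for a direct-mapped dom0:
     *
     *   gnttab_need_iommu_mapping(d)
     *     = is_domain_direct_mapped(d) && need_iommu_pt_sync(d)
     *     = true && false
     *     = false
     */

One way to restore the old behaviour, assuming is_iommu_enabled(d) has the
obvious per-domain semantics, would be to key the check off whether the IOMMU
is enabled at all rather than whether its page tables need an explicit sync:

    #define gnttab_need_iommu_mapping(d)                    \
        (is_domain_direct_mapped(d) && is_iommu_enabled(d))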

Regards,
Rahul

> 
> Cheers,
> 
> -- 
> Julien Grall


 

