Re: [Xen-devel] Unshared IOMMU issues
>>> On 16.02.17 at 16:02, <olekstysh@xxxxxxxxx> wrote:
> On Thu, Feb 16, 2017 at 11:36 AM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>>>>> On 15.02.17 at 18:43, <olekstysh@xxxxxxxxx> wrote:
>>> 1.
>>> I need:
>>> Allow the P2M core on ARM to update the IOMMU mapping from the first
>>> "p2m_set_entry".
>>> I do:
>>> I explicitly set the need_iommu flag for *every* guest domain during
>>> iommu_domain_init() on ARM when the page table is not shared.
>>> At that moment I have no knowledge of whether any device will be
>>> assigned to this domain or not. I just want to receive all mapping
>>> updates from the P2M code. The P2M will update the IOMMU mapping only
>>> when need_iommu is set and the page table is not shared.
>>> I have doubts:
>>> Is it correct to just force the need_iommu flag?
>>
>> No, I don't think so. This is a waste of a measurable amount of
>> resources when page tables aren't shared.
>>
>>> Or maybe another flag should be introduced?
>>
>> Not sure what you're thinking of here. Where's the problem with building
>> IOMMU page tables at the time the first device gets assigned, just
>> like x86 does?
> OK, I have already had a look at arch_iommu_populate_page_table() for x86.
> I don't know at the moment how this solution can help me.
> There are at least two points that prevent me from doing the same thing.
> 1. To create an IOMMU mapping I need both the mfn and the gfn (+ flags).
> I am able to get only the mfn. How can I find the corresponding gfn?

As the x86 one shows, via mfn_to_gmfn(). If ARM doesn't have this,
perhaps it needs to gain it?

> 2. d->page_list seems to contain only domain RAM (not 100% sure).
> Where can I get the other regions (MMIOs, etc.)?

These are necessarily being tracked for the domain, so you need to
take them from wherever they're stored on ARM.

Jan
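[Editorial note: to make the suggestion above concrete, here is a minimal, hypothetical sketch of what a deferred "populate at first device assignment" pass on ARM could look like, modelled on x86's arch_iommu_populate_page_table() of that era. The helpers used (page_list_for_each over d->page_list, page_to_mfn(), mfn_to_gmfn(), iommu_map_page(), the IOMMUF_* flags) are taken from the x86 side; whether mfn_to_gmfn() exists on ARM, and how MMIO regions would be covered, are exactly the open questions in this thread, so this is a sketch under those assumptions, not working code.]

```c
/*
 * Hypothetical ARM analogue of x86's arch_iommu_populate_page_table():
 * walk the domain's RAM page list and replay the mappings into the
 * (unshared) IOMMU page tables when the first device is assigned.
 */
static int arm_iommu_populate_page_table(struct domain *d)
{
    struct page_info *page;
    int rc = 0;

    spin_lock(&d->page_alloc_lock);

    page_list_for_each ( page, &d->page_list )
    {
        unsigned long mfn = page_to_mfn(page);
        /* mfn_to_gmfn() is the piece ARM lacks, per the thread. */
        unsigned long gfn = mfn_to_gmfn(d, mfn);

        if ( gfn == gfn_x(INVALID_GFN) )
            continue;

        rc = iommu_map_page(d, gfn, mfn,
                            IOMMUF_readable | IOMMUF_writable);
        if ( rc )
            break;
    }

    spin_unlock(&d->page_alloc_lock);

    /*
     * Caveat raised in the thread: d->page_list covers only domain RAM.
     * MMIO regions present in the P2M would still have to be replayed
     * from wherever ARM tracks them.
     */
    return rc;
}
```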