
Re: [Xen-devel] Unshared IOMMU issues



On Thu, 16 Feb 2017, Julien Grall wrote:
> Hi Jan,
> 
> On 16/02/17 16:34, Jan Beulich wrote:
> > > > > On 16.02.17 at 17:11, <julien.grall@xxxxxxx> wrote:
> > > On 16/02/17 15:52, Jan Beulich wrote:
> > > > > > > On 16.02.17 at 16:02, <olekstysh@xxxxxxxxx> wrote:
> > > > > On Thu, Feb 16, 2017 at 11:36 AM, Jan Beulich <JBeulich@xxxxxxxx>
> > > > > wrote:
> > > > > > > > > On 15.02.17 at 18:43, <olekstysh@xxxxxxxxx> wrote:
> > > > > > > 1.
> > > > > > > I need:
> > > > > > > Allow the P2M core on ARM to update the IOMMU mapping from the
> > > > > > > first "p2m_set_entry".
> > > > > > > I do:
> > > > > > > I explicitly set the need_iommu flag for *every* guest domain
> > > > > > > during iommu_domain_init() on ARM if the page table is not
> > > > > > > shared.
> > > > > > > At that moment I have no knowledge of whether any device will be
> > > > > > > assigned to this domain or not. I just want to receive all
> > > > > > > mapping updates from the P2M code. The P2M code will update the
> > > > > > > IOMMU mapping only when need_iommu is set and the page table is
> > > > > > > not shared.
> > > > > > > I have doubts:
> > > > > > > Is it correct to just force the need_iommu flag?
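
For illustration only, the approach described above boils down to something
like the sketch below in iommu_domain_init(); this is a rough sketch, not
actual Xen code, and the fields/helpers (need_iommu, iommu_use_hap_pt()) are
used loosely:

/*
 * Rough sketch (not actual Xen code) of the "force need_iommu" idea
 * described above: mark every domain as needing the IOMMU whenever the
 * page tables are not shared, so the P2M code mirrors all mappings from
 * the very first p2m_set_entry().
 */
int iommu_domain_init(struct domain *d)
{
    int ret;

    ret = arch_iommu_domain_init(d);
    if ( ret )
        return ret;

    if ( !iommu_enabled )
        return 0;

    if ( !iommu_use_hap_pt(d) )
        d->need_iommu = 1;   /* forced for *every* domain */

    return 0;
}

The objection that follows is exactly about this: it spends IOMMU page-table
resources on domains that may never have a device assigned.
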
> > > > > > 
> > > > > > No, I don't think so. This is a waste of a measurable amount of
> > > > > > resources when page tables aren't shared.
> > > > > > 
> > > > > > > Or maybe another flag should be introduced?
> > > > > > 
> > > > > > Not sure what you're thinking of here. Where's the problem with
> > > > > > building IOMMU page tables at the time the first device gets
> > > > > > assigned, just like x86 does?
> > > > > OK, I have already had a look at arch_iommu_populate_page_table()
> > > > > for x86.
> > > > > I don't know at the moment how this solution can help me.
> > > > > There are at least two points that prevent me from doing a similar
> > > > > thing.
> > > > > 1. To create an IOMMU mapping I need both the mfn and the gfn
> > > > > (+ flags). I am able to get the mfn only. How can I find the
> > > > > corresponding gfn?
> > > > 
> > > > As the x86 one shows, via mfn_to_gmfn(). If ARM doesn't have
> > > > this, perhaps it needs to gain it?
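
For reference, the x86 arch_iommu_populate_page_table() logic being referred
to is, heavily simplified (locking, error handling and preemption omitted),
along these lines:

/*
 * Heavily simplified sketch of the x86 approach: walk the domain's page
 * list, recover the gfn for each mfn via mfn_to_gmfn(), then mirror the
 * mapping into the IOMMU page tables.
 */
struct page_info *page;

page_list_for_each ( page, &d->page_list )
{
    unsigned long mfn = page_to_mfn(page);
    unsigned long gfn = mfn_to_gmfn(d, mfn);

    if ( gfn != gfn_x(INVALID_GFN) )
        iommu_map_page(d, gfn, mfn,
                       IOMMUF_readable | IOMMUF_writable);
}

The question below is what mfn_to_gmfn() would have to look like on ARM,
which is where the memory cost comes in.
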
> > > 
> > > Looking at the x86 implementation, mfn_to_gmfn is using a table for
> > > that, indexed by the MFN. This requires virtual address space, which
> > > is already scarce on ARM32, and also uses physical memory.
> > > 
> > > I am not convinced this is the right thing to do on ARM, as the only
> > > user so far will be the IOMMU code.
> > > 
> > > Another solution would be to go through the stage-2 page table and
> > > replicate all the mappings.
> > 
> > That's certainly an option, if you want to save the memory (and
> > VA space on ARM32). It only makes the x86 model of establishing
> > the mappings slightly more compute intensive.
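
To make the stage-2 walk alternative concrete: at first device assignment it
would look roughly like the sketch below. This is purely illustrative;
p2m_walk_entries() is a hypothetical iterator, and real code would have to
handle superpages, preemption and memory attributes:

/*
 * Purely illustrative: replicate the existing stage-2 mappings into the
 * (unshared) IOMMU page tables when the first device is assigned.
 * p2m_walk_entries() is a hypothetical iterator over present entries.
 */
static int iommu_replicate_p2m(struct domain *d)
{
    struct p2m_domain *p2m = p2m_get_hostp2m(d);
    gfn_t gfn;
    mfn_t mfn;
    p2m_type_t t;
    int rc = 0;

    p2m_walk_entries ( p2m, gfn, mfn, t )
    {
        /* Only RAM-like entries need to be visible to devices. */
        if ( t != p2m_ram_rw && t != p2m_ram_ro )
            continue;

        rc = iommu_map_page(d, gfn_x(gfn), mfn_x(mfn),
                            IOMMUF_readable | IOMMUF_writable);
        if ( rc )
            break;
    }

    return rc;
}
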
> 
> I made a quick calculation: ARM32 supports up to 40-bit PA and IPA (i.e.
> guest addresses), which means 28 bits of MFN/GFN. The GFN would have to be
> stored in a 32-bit value for alignment, so we would need 2^28 * 4 = 1GiB of
> virtual address space and potentially physical memory.
> We don't have 1GiB of VA space free on 32-bit right now.
> 
> ARM64 currently supports up to 48-bit PA and 48-bit IPA, which means 36 bits
> of MFN/GFN. The GFN would have to be stored in a 64-bit value for alignment,
> so we would need 2^36 * 8 = 512GiB of virtual address space and potentially
> physical memory. While virtual address space is not a problem, the memory is
> a problem for embedded platforms. We want Xen to be as lean as possible.
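
For clarity, the arithmetic behind those figures (assuming 4K pages, so
12 bits of page offset):

/*
 * ARM32: 40-bit PA/IPA, 4K pages -> frame numbers are 40 - 12 = 28 bits.
 *        2^28 entries * 4 bytes (GFN padded to 32-bit) = 1GiB
 * ARM64: 48-bit PA/IPA, 4K pages -> frame numbers are 48 - 12 = 36 bits.
 *        2^36 entries * 8 bytes (GFN padded to 64-bit) = 512GiB
 */
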

I think you are right that it's best not to introduce mfn-to-gfn
tracking on ARM.


> I thought a bit more about the advantage of creating the IOMMU page tables
> later on.
> 
> For devices assigned at domain creation, we know that devices will be
> assigned, so we could let Xen populate the IOMMU while allocating the memory
> for the domain.
> 
> For hotplug devices, this would only happen for PCI, as integrated devices
> cannot be hotplugged. As we go towards emulating a root complex in Xen rather
> than the PV approach, you would need the root complex to be instantiated when
> the domain is created (unless we want to hotplug it too?). IMHO, if you
> assign a root complex, it is likely that you will want to assign a PCI
> device afterwards. So allocating page tables at that time sounds sensible.
> 
> This would avoid walking the stage-2 page tables at runtime.
> 
> Any opinions?
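
To illustrate the proposal, the decision could be taken once at domain
creation, along the lines of the hypothetical sketch below (none of these
helpers exist in this form today):

/*
 * Hypothetical sketch of the proposal above, not an existing interface:
 * decide at domain creation whether unshared IOMMU page tables will be
 * needed, and populate them as memory is allocated for the domain.
 */
static void domain_decide_iommu(struct domain *d,
                                bool has_assigned_devices,
                                bool has_root_complex)
{
    /*
     * Devices assigned at creation time obviously need the IOMMU; an
     * emulated root complex implies PCI hotplug may happen later, so it
     * is treated the same way.
     */
    if ( has_assigned_devices || has_root_complex )
        d->need_iommu = 1;

    /*
     * With need_iommu set this early, IOMMU mappings are added while the
     * domain's memory is populated, so no stage-2 walk is needed at
     * assignment time.
     */
}
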

Obviously, static device assignment is not a problem. The issue is only
hotplug, which today we don't support.

Like you say, hotplug by definition requires a discoverable bus of some
sort, for example PCI. When we introduce it in guests, we'll also
introduce IOMMU pagetables. The only downside of this idea is that it
will require users to write something in the VM config file, for example
pci=[''], just to reserve the right to do PCI hotplug at some point in
the future. This is not the case today on x86. It's not great, but I
cannot see a way around it, given that we probably don't want to
introduce a root complex in all ARM guests by default anyway.
