
Re: [Xen-devel] One question about the hypercall to translate gfn to mfn.



Hi,

At 08:56 +0000 on 06 Jan (1420530995), Tian, Kevin wrote:
> > From: Tim Deegan [mailto:tim@xxxxxxx]
> > At 07:24 +0000 on 12 Dec (1418365491), Tian, Kevin wrote:
> > > but just to confirm one point: from my understanding, whether it's a
> > > mapping operation doesn't really matter. We can invent an interface
> > > to get the p2m mapping and then increase the refcnt; the key is the
> > > refcnt here. When XenGT constructs a shadow GPU page table, it creates
> > > a reference to a guest memory page, so the refcnt must be increased. :-)
> > 
> > True. :)  But Xen does need to remember all the refcounts that were
> > created (so it can tidy up if the domain crashes).  If Xen is already
> > doing that it might as well do it in the IOMMU tables since that
> > solves other problems.
> 
> would a refcnt in the p2m layer be enough, so we don't need separate
> refcnts in both the EPT and IOMMU page tables?

Yes, that sounds right.  The p2m layer is actually the same as the EPT
table, so that is where the refcount should be attached (and it
shouldn't matter whether the IOMMU page tables are shared or not).
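
For illustration, a minimal sketch of what taking that reference might
look like on the hypervisor side, loosely following Xen's
get_gfn()/put_gfn() and get_page()/put_page() idioms; the xengt_* name
is invented and exact signatures vary between Xen versions:

    /* Hypothetical helper: look up gfn in the p2m and take a page
     * reference that pins the backing page while a shadow GPU PTE
     * points at it.  A matching put_page() on shadow teardown (or on
     * domain destruction) drops it. */
    static int xengt_shadow_get_ref(struct domain *d, unsigned long gfn,
                                    mfn_t *mfn)
    {
        p2m_type_t t;

        *mfn = get_gfn(d, gfn, &t);            /* locks this p2m entry */

        if ( !mfn_valid(*mfn) || !p2m_is_ram(t) ||
             !get_page(mfn_to_page(*mfn), d) ) /* the refcount itself */
        {
            put_gfn(d, gfn);
            return -EINVAL;
        }

        put_gfn(d, gfn);   /* unlock the p2m entry; the page ref stays */
        return 0;
    }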

> yes, that's the hard part, requiring experiments to find a good balance
> between complexity and performance. The IOMMU page table is not designed
> for modifications as frequent as those to CPU/GPU page tables, but
> following the above trend makes them connected. Another option might be
> to reserve a big enough range of BFNs to cover all available guest
> memory at boot time, to eliminate the run-time modification overhead.

Sure, or you can map them on demand but keep a cache of maps to avoid
unmapping between uses.
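
Something like this toy direct-mapped cache (all names invented, error
handling elided) illustrates the idea: return a live gfn->bfn mapping
on a hit, and only touch the IOMMU tables on a miss or eviction.

    struct map_cache_entry {
        unsigned long gfn;
        unsigned long bfn;
        bool          valid;
    };

    #define MAP_CACHE_SIZE 256
    static struct map_cache_entry map_cache[MAP_CACHE_SIZE];

    static unsigned long bfn_map_cached(struct domain *d, unsigned long gfn)
    {
        struct map_cache_entry *e = &map_cache[gfn % MAP_CACHE_SIZE];

        if ( e->valid && e->gfn == gfn )
            return e->bfn;                  /* hit: no IOMMU update */

        if ( e->valid )
            iommu_unmap_bfn(d, e->bfn);     /* evict the old mapping */

        e->gfn   = gfn;
        e->bfn   = alloc_unused_bfn(d);     /* hypothetical allocator */
        e->valid = (iommu_map_bfn(d, e->bfn, gfn) == 0);

        return e->valid ? e->bfn : ~0UL;    /* ~0UL: map failed */
    }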

> > Not really.  The IOMMU tables are also 64-bit so there must be enough
> > addresses to map all of RAM.  There shouldn't be any need for these
> > mappings to be _contiguous_, btw.  You just need to have one free
> > address for each mapping.  Again, following how grant maps work, I'd
> > imagine that PVH guests will allocate an unused GFN for each mapping
> > and do enough bookkeeping to make sure they don't clash with other GFN
> > users (grant mapping, ballooning, &c).  PV guests will probably be
> > given a BFN by the hypervisor at map time (which will be == MFN in
> > practice) and just need to pass the same BFN to the unmap call later
> > (they can store it in the GTT meanwhile).
> 
> if possible, I'd prefer to make both consistent, i.e. always find an
> unused GFN?

I don't think that will be possible.  PV domains are already using BFNs
supplied by Xen (in fact == MFN) for backend grant mappings, which
would conflict with supplying their own for these mappings.  But
again, the kernel maintainers for Xen may have a better idea of how
these interfaces are used inside the kernel.  For example, it might be
easy enough to wrap the two systems inside a common API inside Linux.
Again, following how grant mapping works seems like the way forward.
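
To make that concrete, a sketch of such a wrapper (everything here is
invented apart from xen_feature()/XENFEAT_auto_translated_physmap,
which is the same test the Linux grant-mapping code keys off):

    /* Invented kernel-side wrapper hiding the PV/PVH difference
     * behind one call, in the spirit of how grant maps are handled.
     * Assumes <xen/features.h> for xen_feature(). */
    int xengt_map_guest_page(unsigned long gfn, unsigned long *bfn_out)
    {
        if (xen_feature(XENFEAT_auto_translated_physmap)) {
            /* PVH: pick a free GFN ourselves, with the same
             * bookkeeping grant maps use to avoid clashing with
             * other GFN users (grant mapping, ballooning, &c). */
            *bfn_out = alloc_unused_gfn();                /* hypothetical */
            return xengt_hypercall_map_at(gfn, *bfn_out); /* hypothetical */
        }

        /* PV: Xen picks the BFN (== MFN in practice) and hands it
         * back; the caller stores it (e.g. in the GTT) and passes
         * the same BFN to the later unmap call. */
        return xengt_hypercall_map(gfn, bfn_out);         /* hypothetical */
    }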

Cheers,

Tim.
