
Re: [Xen-devel] One question about the hypercall to translate gfn to mfn.



On Fri, Jan 09, 2015 at 08:02:48AM +0000, Tian, Kevin wrote:
> > From: Tim Deegan [mailto:tim@xxxxxxx]
> > Sent: Thursday, January 08, 2015 8:43 PM
> > 
> > Hi,
> > 
> > > > Not really.  The IOMMU tables are also 64-bit so there must be enough
> > > > addresses to map all of RAM.  There shouldn't be any need for these
> > > > mappings to be _contiguous_, btw.  You just need to have one free
> > > > address for each mapping.  Again, following how grant maps work, I'd
> > > > imagine that PVH guests will allocate an unused GFN for each mapping
> > > > and do enough bookkeeping to make sure they don't clash with other GFN
> > > > users (grant mapping, ballooning, &c).  PV guests will probably be
> > > > given a BFN by the hypervisor at map time (which will be == MFN in
> > practice) and just need to pass the same BFN to the unmap call later
> > > > (it can store it in the GTT meanwhile).
> > >
> > > if possible I'd prefer to make both consistent, i.e. always finding an unused
> > > GFN?
> > 
> > I don't think it will be possible.  PV domains are already using BFNs
> > supplied by Xen (in fact == MFN) for backend grant mappings, which
> > would conflict with supplying their own for these mappings.  But
> > again, I think the kernel maintainers for Xen may have a better idea
> > of how these interfaces are used inside the kernel.  For example,
> > it might be easy enough to wrap the two systems inside a common API
> > inside linux.   Again, following how grant mapping works seems like
> > the way forward.
> > 
> 
> So Konrad, do you have any insight here? :-)

For grants we end up making the 'struct page' for said grant visible
in our linear address space. We stash the original BFN (MFN) in the
'struct page' and replace the P2M entry in PV guests with the new
BFN (MFN). David and Jennifer are working on making this more
lightweight.
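
Roughly, for illustration only (the exact helpers and where the original
MFN gets stashed are not meant to be the real grant-table code, just the
shape of it - using page->private for the stash is an assumption made
for this example):

/* Illustrative sketch, not the actual grant-table code.  Stash the
 * original MFN so the unmap path can restore the P2M entry, then
 * point the P2M entry at the foreign BFN/MFN. */
#include <linux/mm.h>
#include <asm/xen/page.h>

static int stash_and_remap(struct page *page, unsigned long foreign_mfn)
{
	unsigned long pfn = page_to_pfn(page);

	/* Remember the machine frame this page originally pointed at. */
	set_page_private(page, pfn_to_mfn(pfn));

	/* Replace the P2M entry so the guest's linear mappings of this
	 * page now reach the foreign frame. */
	if (!set_phys_to_machine(pfn, foreign_mfn))
		return -ENOMEM;

	return 0;
}

static void restore_p2m(struct page *page)
{
	/* Put the original MFN back on unmap. */
	set_phys_to_machine(page_to_pfn(page), page_private(page));
}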

How often do we do these updates? We could also do it a simpler way -
which is what backend drivers do - and get a swath of vmalloc memory
and hook the BFNs into it. That mapping can stay for quite some time.
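
Something like this, as a sketch of that backend-style approach (error
handling trimmed, and whether a plain update_va_mapping is the right
primitive for these BFNs is an assumption - backends do the equivalent
via the grant-table ops):

/* Sketch: reserve a chunk of vmalloc address space and hook a set of
 * BFNs/MFNs into it, roughly the way PV backends map foreign pages. */
#include <linux/mm.h>
#include <linux/vmalloc.h>
#include <asm/xen/hypercall.h>
#include <asm/xen/page.h>

static void *map_bfns(unsigned long *bfns, unsigned int nr)
{
	struct vm_struct *area;
	unsigned int i;

	/* Reserve nr pages worth of kernel virtual address space. */
	area = alloc_vm_area(nr * PAGE_SIZE, NULL);
	if (!area)
		return NULL;

	for (i = 0; i < nr; i++) {
		unsigned long va = (unsigned long)area->addr + i * PAGE_SIZE;

		/* Point the reserved VA at the BFN. */
		if (HYPERVISOR_update_va_mapping(va,
				mfn_pte(bfns[i], PAGE_KERNEL),
				UVMF_INVLPG)) {
			free_vm_area(area);
			return NULL;
		}
	}
	return area->addr;
}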

The neat thing about vmalloc is that it is a sliding-window type
mechanism for dealing with memory that is not usually accessed via the
linear page tables.

I suppose the complexity behind this is that this 'window' onto the GPU
page tables needs to change. As in, it moves around as different guests
do things. So the mechanism of swapping this 'window' is going to be
expensive to map/unmap (as you have to flush the TLBs in the initial
domain for the page tables - unless you have multiple 'windows' and we
flush the older ones lazily? But that sounds complex).
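
For what it's worth, the lazy-flush idea in outline (purely
illustrative: window_map()/window_unmap() stand in for whatever the
real map/unmap primitive ends up being, NR_WINDOWS is an arbitrary
number, and the hard part - knowing when deferring the flush is
actually safe - is not shown):

/* A small ring of mapping windows: tearing one down defers the TLB
 * flush, and we only pay for flush_tlb_all() when a stale slot is
 * about to be reused. */
#include <linux/types.h>
#include <asm/tlbflush.h>

#define NR_WINDOWS 4

/* Placeholder primitives, e.g. the map/unmap sketched above. */
extern void *window_map(unsigned long *bfns, unsigned int nr);
extern void window_unmap(void *va);

struct mapping_window {
	void *va;	/* reserved VA range for this window */
	bool stale;	/* unmapped, but maybe still in some TLB */
};

static struct mapping_window windows[NR_WINDOWS];
static unsigned int next_window;

static void *window_get(unsigned long *bfns, unsigned int nr)
{
	struct mapping_window *w = &windows[next_window];

	next_window = (next_window + 1) % NR_WINDOWS;

	/* Flush lazily: only if this slot was used before and has not
	 * been flushed since it went stale. */
	if (w->stale) {
		flush_tlb_all();
		w->stale = false;
	}

	w->va = window_map(bfns, nr);
	return w->va;
}

static void window_put(struct mapping_window *w)
{
	window_unmap(w->va);	/* tear down the mapping ... */
	w->stale = true;	/* ... but defer the TLB flush */
}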

Who is doing the audit/modification? Is it some application in the
initial (backend) domain or some driver in the kernel?

> 
> Thanks
> Kevin
