
Re: [Xen-devel] One question about the hypercall to translate gfn to mfn.



On 09/01/15 08:02, Tian, Kevin wrote:
>> From: Tim Deegan [mailto:tim@xxxxxxx]
>> Sent: Thursday, January 08, 2015 8:43 PM
>>
>> Hi,
>>
>>>> Not really.  The IOMMU tables are also 64-bit so there must be enough
>>>> addresses to map all of RAM.  There shouldn't be any need for these
>>>> mappings to be _contiguous_, btw.  You just need to have one free
>>>> address for each mapping.  Again, following how grant maps work, I'd
>>>> imagine that PVH guests will allocate an unused GFN for each mapping
>>>> and do enough bookkeeping to make sure they don't clash with other GFN
>>>> users (grant mapping, ballooning, &c).  PV guests will probably be
>>>> given a BFN by the hypervisor at map time (which will be == MFN in
>>>> practice) and just need to pass the same BFN to the unmap call later
>>>> (they can store it in the GTT meanwhile).
>>>
>>> If possible I'd prefer to make both consistent, i.e. always finding an unused GFN?
>>
>> I don't think it will be possible.  PV domains are already using BFNs
>> supplied by Xen (in fact == MFN) for backend grant mappings, which
>> would conflict with supplying their own for these mappings.  But
>> again, I think the kernel maintainers for Xen may have a better idea
>> of how these interfaces are used inside the kernel.  For example,
>> it might be easy enough to wrap the two systems in a common API
>> inside Linux.  Again, following how grant mapping works seems like
>> the way forward.
>>
> 
> So Konrad, do you have any insight here? :-)

Malcolm took two pages of this notebook explaining to me how he thought
it should work (in combination with his PV IOMMU work), so I'll let him
explain.

David
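
For illustration, here is a minimal sketch of the per-domain GFN bookkeeping
Tim describes for PVH guests: reserve a window of guest-physical address space
believed to be unused and hand out one GFN per mapping.  The window base/size
and the helper names are assumptions made up for the sketch, not existing Xen
or Linux interfaces, and real code would have to coordinate with the other GFN
users (grant mappings, ballooning, &c).

/*
 * Sketch only: bookkeeping for "scratch" GFNs used to back foreign
 * mappings in a PVH guest.  The window below is assumed to be unused
 * guest-physical space; a real implementation would have to reserve it
 * against grant mappings, ballooning and any other GFN users.
 */
#define SCRATCH_GFN_BASE    0x100000UL   /* hypothetical unused GFN window */
#define SCRATCH_GFN_COUNT   4096UL
#define GFN_BITS_PER_WORD   (8 * sizeof(unsigned long))

static unsigned long scratch_gfn_bitmap[(SCRATCH_GFN_COUNT +
                                         GFN_BITS_PER_WORD - 1) /
                                        GFN_BITS_PER_WORD];

/* Pick a free GFN from the window and mark it busy; returns ~0UL if full. */
unsigned long alloc_scratch_gfn(void)
{
    unsigned long i;

    for (i = 0; i < SCRATCH_GFN_COUNT; i++) {
        unsigned long *w = &scratch_gfn_bitmap[i / GFN_BITS_PER_WORD];
        unsigned long bit = 1UL << (i % GFN_BITS_PER_WORD);

        if (!(*w & bit)) {
            *w |= bit;
            return SCRATCH_GFN_BASE + i;
        }
    }
    return ~0UL;
}

/* Return a GFN previously handed out by alloc_scratch_gfn(). */
void free_scratch_gfn(unsigned long gfn)
{
    unsigned long i = gfn - SCRATCH_GFN_BASE;

    scratch_gfn_bitmap[i / GFN_BITS_PER_WORD] &=
        ~(1UL << (i % GFN_BITS_PER_WORD));
}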
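
And a rough sketch of the "common API" idea: one map/unmap wrapper that
dispatches between the PVH case (the guest picks its own unused GFN) and the
PV case (Xen hands back a BFN == MFN at map time, which must be passed back on
unmap).  The hyp_iommu_map()/hyp_iommu_unmap() wrappers are placeholders for
whatever interface the PV IOMMU work ends up defining, and xen_pvh_domain() is
used only as a stand-in "are we PVH?" test; only the shape of the bookkeeping
is meant to be illustrative.

/*
 * Sketch only: a single wrapper over both BFN models.  All externs below
 * are placeholders, except the scratch-GFN helpers from the sketch above.
 */
extern int xen_pvh_domain(void);                            /* stand-in test */
extern int hyp_iommu_map(unsigned long gfn_hint, unsigned long foreign,
                         unsigned long *bfn_out);           /* placeholder */
extern int hyp_iommu_unmap(unsigned long bfn);              /* placeholder */
extern unsigned long alloc_scratch_gfn(void);    /* from the sketch above */
extern void free_scratch_gfn(unsigned long gfn);

struct foreign_map {
    unsigned long bfn;  /* what gets written into the GTT, and unmapped later */
};

int map_foreign_page(unsigned long foreign, struct foreign_map *m)
{
    unsigned long gfn_hint = ~0UL;
    int rc;

    if (xen_pvh_domain()) {
        /* PVH: the caller supplies an unused GFN and remembers it. */
        gfn_hint = alloc_scratch_gfn();
        if (gfn_hint == ~0UL)
            return -1;
    }

    /* For PV guests Xen chooses the BFN (== MFN in practice). */
    rc = hyp_iommu_map(gfn_hint, foreign, &m->bfn);
    if (rc && gfn_hint != ~0UL)
        free_scratch_gfn(gfn_hint);
    return rc;
}

int unmap_foreign_page(struct foreign_map *m)
{
    int rc = hyp_iommu_unmap(m->bfn);

    /* For PVH the BFN was our own scratch GFN; give it back. */
    if (!rc && xen_pvh_domain())
        free_scratch_gfn(m->bfn);
    return rc;
}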

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel