
Re: [Xen-devel] One question about the hypercall to translate gfn to mfn.



> -----Original Message-----
> From: Yu, Zhang [mailto:yu.c.zhang@xxxxxxxxxxxxxxx]
> Sent: 09 December 2014 10:11
> To: Paul Durrant; Keir (Xen.org); Tim (Xen.org); JBeulich@xxxxxxxx; Kevin
> Tian; Xen-devel@xxxxxxxxxxxxx
> Subject: One question about the hypercall to translate gfn to mfn.
> 
> Hi all,
> 
>    As you can see, we are pushing our XenGT patches upstream. One
> feature we need in Xen is the ability to translate a guest's gfn to an
> mfn in the XenGT dom0 device model.
> 
>    Here we have two similar solutions:
>    1> Paul told me (and thank you, Paul :)) that there used to be a
> hypercall, XENMEM_translate_gpfn_list, which was removed by Keir in
> commit 2d2f7977a052e655db6748be5dabf5a58f5c5e32 because it had no
> users at that time. So solution 1 is to revert this commit. However,
> since the hypercall was removed ages ago, the revert runs into many
> conflicts, e.g. gmfn_to_mfn is no longer used on x86, etc.
> 
>    2> In our project, we defined a new hypercall,
> XENMEM_get_mfn_from_pfn, whose implementation is similar to that of
> the previous XENMEM_translate_gpfn_list. One of the major differences
> is that the new hypercall is x86-only (handled in arch_memory_op), so
> we do not have to worry about the Arm side.
> 
>    Does anyone have any suggestions about this?

IIUC, what is needed is a means to IOMMU-map a gfn in the service domain
(dom0 for the moment) such that it can be accessed by the GPU. I think
use of a raw mfn value currently works only because dom0 is using a 1:1
IOMMU mapping scheme. Is my understanding correct, or do you really need
raw mfn values?

  Paul

>    Thanks in advance. :)
> 
> B.R.
> Yu
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

