
Re: [Xen-devel] [RFD] OP-TEE (and probably other TEEs) support



On 29/11/16 19:19, Volodymyr Babchuk wrote:
Hi Julien,

Hi Volodymyr,



On 29 November 2016 at 20:55, Julien Grall <julien.grall@xxxxxxx> wrote:
Hi Volodymyr,

On 29/11/16 17:40, Volodymyr Babchuk wrote:

On 29 November 2016 at 18:02, Julien Grall <julien.grall@xxxxxxx> wrote:

On 29/11/16 15:27, Volodymyr Babchuk wrote:

On 28 November 2016 at 22:10, Julien Grall <julien.grall@xxxxxxx> wrote:

On 28/11/16 18:09, Volodymyr Babchuk wrote:

On 28 November 2016 at 18:14, Julien Grall <julien.grall@xxxxxxx>
wrote:

On 24/11/16 21:10, Volodymyr Babchuk wrote:

I don't follow your point here. Why would the SMC handler need to map the
guest memory?

Because this is how parameters are passed. We can pass some parameters
in registers, but in OP-TEE, for example, the registers hold only the
address of a command buffer; the actual parameters live in that buffer.
Some of those parameters can be references to other memory objects.
So, to translate IPAs to PAs, we need to map this command buffer,
analyze it, and so on.
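
To make the layout concrete, here is a minimal sketch of what such a
command buffer can look like, loosely modeled on OP-TEE's message
protocol (optee_msg.h). The field and constant names are simplified
illustrations, not the exact ABI:

#include <stdint.h>

/*
 * Simplified view of an OP-TEE-style command buffer. The SMC itself
 * only carries the address of one "struct msg_arg"; everything else,
 * including references to further guest buffers, lives inside it.
 */
#define ATTR_TYPE_VALUE   0x1   /* parameter passed by value           */
#define ATTR_TYPE_MEMREF  0x2   /* parameter is a pointer + size pair  */

struct msg_param {
    uint64_t attr;              /* ATTR_TYPE_VALUE or ATTR_TYPE_MEMREF */
    union {
        struct {
            uint64_t a, b;      /* immediate values                    */
        } value;
        struct {
            uint64_t buf_ptr;   /* guest IPA that must become a PA     */
            uint64_t size;
        } memref;
    } u;
};

struct msg_arg {
    uint32_t cmd;               /* e.g. INVOKE_COMMAND                 */
    uint32_t session;
    uint32_t ret;
    uint32_t num_params;
    struct msg_param params[];  /* variable number of parameters       */
};

So a mediator has to map the page holding struct msg_arg, walk
params[], and rewrite every memref.buf_ptr before forwarding the SMC.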


So the SMC issued will contain a PA of a page belonging to the guest or Xen?
It will be a guest page. But all references to other pages will hold
real PAs, so the TEE can work with them.

Let's dive into an example: the hypervisor traps the SMC, and the
mediation layer (see below) sees that it was an INVOKE_COMMAND request.
The address of the command buffer is held in the register pair (r1, r2).
The mediation layer changes the address in this register pair to the
real PA of the command buffer. It then maps the specified page and
checks the parameters. One of the parameters has type MEMREF, so the
mediation layer has to change the IPA of the specified buffer to a PA.
Then it issues the real SMC call.
After return from the SMC, it inspects the registers and the buffer
again and replaces the memory references back.
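
A rough sketch of that flow in C, assuming hypothetical helpers
ipa_to_pa(), map_guest_page(), unmap_guest_page(),
forward_smc_to_tee() and restore_guest_addresses() standing in for
Xen's real p2m/domain_page primitives (their exact names and use here
are assumptions, not Xen's actual API):

/* Hypothetical mediator for a trapped INVOKE_COMMAND SMC. */
static long mediate_invoke_command(struct cpu_user_regs *regs)
{
    /* 1. Command buffer IPA is split across the (r1, r2) register pair. */
    uint64_t ipa = ((uint64_t)regs->r1 << 32) | regs->r2;
    uint64_t pa  = ipa_to_pa(current_domain(), ipa);   /* IPA -> PA */
    struct msg_arg *arg;
    unsigned int i;

    if ( pa == INVALID_PA )
        return -EFAULT;

    /* 2. Map the guest page so the parameters can be inspected. */
    arg = map_guest_page(pa);

    /* 3. Rewrite every MEMREF parameter from IPA to PA. */
    for ( i = 0; i < arg->num_params; i++ )
    {
        struct msg_param *p = &arg->params[i];

        if ( p->attr == ATTR_TYPE_MEMREF )
            p->u.memref.buf_ptr =
                ipa_to_pa(current_domain(), p->u.memref.buf_ptr);
    }

    /* 4. Patch the register pair with the real PA and forward the SMC. */
    regs->r1 = pa >> 32;
    regs->r2 = pa & 0xffffffff;
    forward_smc_to_tee(regs);

    /* 5. On return, translate results and restore the guest's IPAs. */
    restore_guest_addresses(arg, regs, ipa);

    unmap_guest_page(arg);
    return 0;
}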

I was about to ask whether the SMC call has some kind of metadata describing the parameters, but you answered that in another mail, so I will follow up there.

Regarding the rest, you said that the buffer passed to the real TEE will be backed by guest memory. There are a few problems with that which you don't seem to address in this design document:
 - The buffer may be contiguous in the IPA space but discontiguous in PA space, because Xen may not be able to allocate all the memory for the guest contiguously in PA space. How do you plan to handle a buffer greater than Xen's page granularity (i.e. 4K)? (See the sketch after this list.)
 - Can all types of memory be passed to the TEE (e.g. foreign pages, grants, MMIO...)? I suspect not.
 - The TEE may run in parallel with the guest OS, which means we have to make sure the pages will never be removed by the guest OS (see the XENMEM_decrease_reservation hypercall).
 - The IPA -> PA translation can be slow, as it would need to be done in software (see p2m_lookup). Is there any upper limit on the number of buffers and indirections allowed?
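
To illustrate the first point: a buffer that is contiguous in the
guest's IPA space has to be translated one 4K page at a time, because
Xen may have backed it with discontiguous PAs, so rewriting a single
base address is not enough. A minimal sketch, again using the
hypothetical ipa_to_pa() helper as a stand-in for a software p2m walk:

#define PAGE_SIZE  4096UL
#define PAGE_MASK  (~(PAGE_SIZE - 1))

/*
 * Walk a guest MEMREF one page at a time. Each page needs its own
 * (potentially slow, software) IPA -> PA lookup, and the results may
 * not be contiguous, so a multi-page buffer cannot simply be handed
 * to the TEE as "base PA + size".
 */
static int check_memref_pages(struct domain *d, uint64_t ipa, uint64_t size)
{
    uint64_t off;

    for ( off = ipa & PAGE_MASK; off < ipa + size; off += PAGE_SIZE )
    {
        uint64_t pa = ipa_to_pa(d, off);    /* one p2m walk per page */

        if ( pa == INVALID_PA )
            return -EFAULT;                 /* hole or removed page  */

        /*
         * Something also has to pin the page here, so the guest
         * cannot drop it (XENMEM_decrease_reservation) while the
         * TEE is still using it.
         */
    }

    return 0;
}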

Regards,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
