
Re: [Xen-devel] [PATCH 1/5] xen/arm: optee: impose limit on shared buffer size



Hi,

On 9/17/19 1:28 PM, Volodymyr Babchuk wrote:

Hi Julien,


Julien Grall writes:

Hi Volodymyr,

On 9/16/19 4:26 PM, Volodymyr Babchuk wrote:

Hi Julien,

Julien Grall writes:

Hi,

On 9/12/19 8:45 PM, Volodymyr Babchuk wrote:

Hi Julien,

Julien Grall writes:

Hi Volodymyr,

On 9/11/19 7:48 PM, Volodymyr Babchuk wrote:

Julien Grall writes:

Hi Volodymyr,

On 8/23/19 7:48 PM, Volodymyr Babchuk wrote:
We want to limit the number of calls to lookup_and_pin_guest_ram_addr()
per request. There are two ways to do this: either preempt
translate_noncontig() or limit the size of one shared buffer.

It is quite hard to preempt translate_noncontig(), because it is deeply
nested. So we chose the second option. We will allow 512 pages per
shared buffer. This does not interfere with the GP standard, as it
requires that the size limit for a shared buffer should be at lest 512kB.
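
To make the chosen option concrete, here is a minimal sketch of the kind of
up-front size check being described. The constant and helper names, and the
idea that the check sits in front of the page-pinning loop, are assumptions
for illustration, not the actual mediator code.

#include <errno.h>
#include <stdint.h>

#define PAGE_SIZE          4096
#define MAX_SHM_BUF_PAGES  512    /* 512 pages * 4KB = 2MB per shared buffer */

/* Hypothetical helper: round a byte count up to whole pages. */
static inline uint64_t shm_buf_pages(uint64_t bytes)
{
    return (bytes + PAGE_SIZE - 1) / PAGE_SIZE;
}

/* Stand-in for the check translate_noncontig() would do before it
 * starts calling lookup_and_pin_guest_ram_addr() for every page. */
static int check_shm_buf_size(uint64_t bytes)
{
    if ( shm_buf_pages(bytes) > MAX_SHM_BUF_PAGES )
        return -ENOMEM;    /* refuse over-sized buffers up front */

    return 0;
}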

Do you mean "least" instead of "lest"?
Yes

If so, why would 512 pages (i.e. 1MB)
I missed that earlier. But 512 pages is actually 2MB.

be plenty for most of the use cases? What does "xtest" consist of?
The biggest buffer xtest tries to allocate is a mere 32KB. I believe that
1MB is enough for most cases, because OP-TEE itself has very limited
resources. But this value was chosen arbitrarily.

Could we potentially reduce it to, let's say, 512KB (or maybe lower) if
xtest only allocates 32KB?
Potentially, yes. But only down to 512KB if we want to stay compatible with
the Global Platform specification. Why are you asking, though?

Does the Global Platform specification limit it to 512KB? Or is that a minimum?
The GP spec says that the platform should allow *at least* 512KB. An upper
limit is not set.

Because the smaller the buffer is, the less time it will take to
process in the worst case. Also, if we can have a reason for the size
(you seem to suggest the spec defines a size...) then it is much better
than a random value.
I have no strong arguments here, but I want to allow the biggest size
possible. It seems that 512 pages is the accepted limit in the hypervisor
code (at least in p2m.c), so I chose this value.

If the GP spec requires at least 512KB, then any portable code should
be able to deal with 512KB, right? So why would you allow more? What
are the pros/cons?
Yes, any portable code should work with 512KB. I want to allow bigger
buffers for two reasons. First, on the OP-TEE issue tracker people often
ask how to increase various memory limits across OP-TEE, so apparently
people sometimes want bigger buffers. Second, there are non-portable
things like Secure Data Path (SDP). For SDP one wants to pass media data
(like audio and video) to and from OP-TEE, and media requires big buffers.

But what's the limit in that case? Would 2MB really be sufficient for them?


Anyway, I can see that 512 pages is an established limit in the p2m
code. So why do you want the OP-TEE mediator to have a smaller limit?

One limit doesn't rule them all. ;)

The preemption is slightly more elaborate than checking for preemption every 512 pages. For instance, in p2m_cache_flush_range() we take into account how much has already been done.

In your case, you also have to factor in how long the SMC call is going to take. This is not an exact science, but the fact that at the moment the limit for OP-TEE is much lower leads me to think a smaller limit is better. Most likely OP-TEE will deny it anyway...
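
The pattern being referred to looks roughly like the sketch below: do a
bounded amount of work, periodically check whether we should yield, and
remember how far we got so the operation can be restarted. The names are
illustrative rather than the actual p2m_cache_flush_range() code, and Xen
itself would create a continuation (e.g. return -ERESTART) rather than
-EAGAIN.

#include <errno.h>

#define PREEMPT_CHECK_INTERVAL  512   /* pages between preemption checks */

/* Illustrative preemptible loop: need_preempt() stands in for the
 * hypervisor's "time to yield?" check. */
static int flush_range(unsigned long *start, unsigned long end,
                       int (*need_preempt)(void))
{
    unsigned long gfn = *start;
    unsigned long count = 0;

    while ( gfn < end )
    {
        /* ... per-page flush work would go here ... */

        gfn++;
        if ( ++count >= PREEMPT_CHECK_INTERVAL )
        {
            count = 0;
            if ( need_preempt() )
            {
                *start = gfn;     /* record how much is already done */
                return -EAGAIN;   /* caller restarts from *start later */
            }
        }
    }

    *start = gfn;
    return 0;
}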


I want to be straight here: 512KB will likely work for most of the
users. But there are always users who want more, so I would like to set
the largest plausible limit just in case.
I am pretty unconvinced that a higher value than what the spec requests is right. But I don't have more grounds than my gut feeling (I am always on the safe side). You are the maintainer of that code, so I am not going to push any more for a lower value.

However, we should at least document the reasoning, because this does not seem to be such a random value anymore.
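
For example, the reasoning could be captured next to the constant itself;
this is only a sketch and the constant name is an assumption:

/*
 * GlobalPlatform only requires the platform to support shared buffers
 * of at least 512KB.  We allow up to 512 pages (2MB with 4KB pages)
 * because that matches the batching limit used elsewhere in the
 * hypervisor (e.g. in the p2m code) and leaves headroom for
 * non-portable users such as Secure Data Path, which passes large
 * media buffers to and from OP-TEE.
 */
#define MAX_SHM_BUF_PAGES   512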

Cheers,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

