
Re: [Xen-devel] [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.





On 03/08/16 17:42, Tamas K Lengyel wrote:
On Wed, Aug 3, 2016 at 10:24 AM, Julien Grall <julien.grall@xxxxxxx> wrote:
Hi Tamas,


On 03/08/16 17:01, Tamas K Lengyel wrote:

On Wed, Aug 3, 2016 at 8:08 AM, Julien Grall <julien.grall@xxxxxxx> wrote:

Hello Sergej,

Please try to reply to all when answering on the ML. Otherwise the answer
may be delayed/lost.

On 03/08/16 13:45, Sergej Proskurin wrote:


The interesting part about #VE is that it allows certain violations
(currently limited to EPT violations -- future implementations might
also cover further violation types) to be handled inside the guest,
without the need to explicitly trap into the VMM. Thus, #VE allows
switching between different memory views in-guest. Because of this, I
also agree that event channels would suffice in our case, since we do
not have sufficient hardware support on ARM and would need to trap into
the VMM anyway.
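For contrast, on x86 that in-guest switch needs neither a hypercall nor
a VM exit: the #VE handler (or other guest code) can flip views with the
VMFUNC EPTP-switching leaf. A minimal sketch, assuming the helper name
vmfunc_switch_view (hypothetical) and the documented register convention
for the instruction:

    /*
     * Switch to altp2m view `view` from inside an x86 guest via VMFUNC
     * leaf 0 (EPTP switching).  This is the piece of hardware support
     * that ARM lacks, hence the trap into the VMM mentioned above.
     */
    static inline void vmfunc_switch_view(unsigned int view)
    {
        asm volatile ( ".byte 0x0f, 0x01, 0xd4"  /* VMFUNC */
                       : /* no outputs */
                       : "a" (0),                /* EAX = 0: EPTP-switching leaf */
                         "c" (view)              /* ECX = altp2m view index */
                       : "memory" );
    }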



The cost of doing a hypercall on ARM is very small compared to x86
(~1/3 of the number of x86 cycles) because we don't have to save all
the state every time. So I am not convinced by the argument of limiting
the number of traps to the hypervisor as a reason to let a guest play
with altp2m on ARM.

I will have to see a concrete example before going forward with the event
channel.


It is out of scope for what we are trying to achieve with this series
at this point. The question at hand is really whether the altp2m switch
and gfn remapping ops should be exposed to the guest. Without #VE -
which we are not implementing - changing the mem_access settings from
within the guest doesn't make sense, so restricting access there is
reasonable.

As I outlined, the switch and gfn remapping can have legitimate
use-cases by themselves without any mem_access bits involved. However,
it is not our use-case so we have no problem restricting access there
either. So the question is whether that's the right path to take here.
At this point I'm not sure whether there is agreement on that or not.


Could you give a legitimate use case of gfn remapping from the guest? And
explain how it would work with only this patch series.

From my perspective, and after the numerous exchanges in this thread, I do
not think it is wise to expose this interface to the guest on ARM. The usage
is very limited but it increases the attack surface. So I will not ack such a
choice, but I will not nack it either.


Since the interface would be available only for domains that were
explicitly created with the altp2m=1 flag set, I think the exposure is
minimal.
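(Purely as an illustration of that flag, and not necessarily the exact
syntax the series settles on, the guest would be built with something
along the lines of:

    # xl domain configuration file
    altp2m = 1

in its configuration.)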

As for a use-case, I don't have a real-world example as it's not how
we use the system. But as I pointed out earlier, I could imagine gfn
remapping being used to protect kernel memory areas against
information disclosure by only switching to the accessible altp2m view
when certain conditions are met. What I mean is that a certain gfn
could be remapped to a dummy mfn by default and only switched to the
accessible view when necessary (a rough sketch follows below). How much
extra protection that would add and under what conditions is up for
debate, but IMHO it is a legitimate experimental use - and altp2m is an
experimental system.
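To make that idea concrete, here is a minimal sketch using the libxc
altp2m helpers that already exist on the x86 side
(xc_altp2m_set_domain_state, xc_altp2m_create_view, xc_altp2m_change_gfn,
xc_altp2m_switch_to_view); whether this series wires up the same wrappers
for ARM is an assumption, and the hide_gfn/expose_gfn/secret_gfn/dummy_gfn
names are hypothetical. In the scenario described above the guest itself
would issue the equivalent HVMOP_altp2m hypercalls instead of going
through the toolstack:

    #include <stdint.h>
    #include <stdbool.h>
    #include <xenctrl.h>

    /*
     * Illustration only: hide the contents of secret_gfn behind a dummy
     * page in an alternate view, and run the domain in that view by
     * default.
     */
    static int hide_gfn(xc_interface *xch, uint32_t domid,
                        xen_pfn_t secret_gfn, xen_pfn_t dummy_gfn)
    {
        uint16_t restricted_view;

        if ( xc_altp2m_set_domain_state(xch, domid, true) )
            return -1;

        /* Create a new view; it starts out as a copy of the host p2m. */
        if ( xc_altp2m_create_view(xch, domid, XENMEM_access_rwx,
                                   &restricted_view) )
            return -1;

        /* In that view, point the secret gfn at the dummy page instead. */
        if ( xc_altp2m_change_gfn(xch, domid, restricted_view,
                                  secret_gfn, dummy_gfn) )
            return -1;

        /* Run restricted by default ... */
        return xc_altp2m_switch_to_view(xch, domid, restricted_view);
    }

    /* ... and flip back to view 0 only while the data has to be visible. */
    static int expose_gfn(xc_interface *xch, uint32_t domid)
    {
        return xc_altp2m_switch_to_view(xch, domid, 0);
    }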

Such a solution may give you a lot of headaches with the cache.


Whether it's worth having such an interface or not I'm not sure; I'm
OK with going either way on this, but since it's available on x86 I
think it would make sense to have feature parity - even if only
partially for now.

As I mentioned a couple of times, we do not introduce features on ARM just because they exist on x86. We introduce them after careful thought about how they would benefit ARM and how they would be used.

Nothing prevents a follow-up series from allowing the guest to access the altp2m operations by default, because the interface is already there.

Stefano, do you have any opinions on this?

Regards,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

