
Re: [Xen-devel] [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.




On 08/03/2016 02:18 PM, Andrew Cooper wrote:
> On 03/08/16 13:13, Julien Grall wrote:
>>
>> On 03/08/16 13:00, Andrew Cooper wrote:
>>> On 03/08/16 12:53, Julien Grall wrote:
>>>> On 02/08/16 17:08, Andrew Cooper wrote:
>>>>> On 02/08/16 08:34, Julien Grall wrote:
>>>>>> Hi Andrew,
>>>>>>
>>>>>> On 02/08/2016 00:14, Andrew Cooper wrote:
>>>>>>> On 01/08/2016 19:15, Julien Grall wrote:
>>>>>>>> On 01/08/16 18:10, Sergej Proskurin wrote:
>>>>>>>>> Hello all,
>>>>>>>> Hello Sergej,
>>>>>>>>
>>>>>>>>> The following patch series can be found on Github[0] and is
>>>>>>>>> part of my contribution to this year's Google Summer of Code
>>>>>>>>> (GSoC)[1]. My project is managed by the organization The
>>>>>>>>> Honeynet Project. As part of GSoC, I am being supervised by
>>>>>>>>> the Xen developer Tamas K. Lengyel <tamas@xxxxxxxxxxxxx>,
>>>>>>>>> George D. Webster, and Steven Maresca.
>>>>>>>>>
>>>>>>>>> In this patch series, we provide an implementation of the
>>>>>>>>> altp2m subsystem for ARM. Our implementation is based on the
>>>>>>>>> altp2m subsystem for x86, providing additional --alternate--
>>>>>>>>> views on the guest's physical memory by means of the ARM 2nd
>>>>>>>>> stage translation mechanism. The patches introduce new HVMOPs
>>>>>>>>> and extend the p2m subsystem. Also, we extend libxl to support
>>>>>>>>> altp2m on ARM and modify xen-access to test the suggested
>>>>>>>>> functionality.
>>>>>>>>>
>>>>>>>>> To be more precise, altp2m allows one to create and switch to
>>>>>>>>> additional p2m views (i.e. gfn to mfn mappings). These views
>>>>>>>>> can be manipulated and activated at will through the provided
>>>>>>>>> HVMOPs. In this way, the active guest instance in question can
>>>>>>>>> seamlessly continue execution without noticing that anything
>>>>>>>>> has changed. The prime field of application for altp2m is
>>>>>>>>> Virtual Machine Introspection, where guest systems are
>>>>>>>>> analyzed from outside the VM.
>>>>>>>>>
>>>>>>>>> Altp2m can be activated by means of the guest control
>>>>>>>>> parameter "altp2m" on the x86 and ARM architectures. By
>>>>>>>>> design, the altp2m functionality can by default also be used
>>>>>>>>> from within the guest. For use cases requiring purely external
>>>>>>>>> access to altp2m, a custom XSM policy is necessary on both x86
>>>>>>>>> and ARM.
>>>>>>>> As I said on the previous version, altp2m operations *should
>>>>>>>> not* be exposed to ARM guests. A design written for x86 may not
>>>>>>>> fit exactly for ARM (and vice versa), so you will need to
>>>>>>>> explain why you think we should follow the same pattern.
>>>>>>> Sorry, but I am going to step in here and disagree.  All the
>>>>>>> x86 justifications for altp2m being accessible to guests apply
>>>>>>> equally to ARM, as they are hardware independent.
>>>>>>>
>>>>>>> I realise you are the maintainer, but the onus is on you to
>>>>>>> justify why the behaviour should be different between x86 and
>>>>>>> ARM, rather than simply to complain at it being the same.
>>>>>>>
>>>>>>> Naturally, technical issues about the details of the
>>>>>>> implementation, the algorithms, etc. are of course fine, but I
>>>>>>> don't see any plausible reason why ARM should purposefully be
>>>>>>> different from x86 in terms of available functionality, and
>>>>>>> there are several good reasons why it should be the same (not
>>>>>>> least, feature parity across architectures).
>>>>>> The question here is how a guest could take advantage of access
>>>>>> to altp2m on ARM today. Whilst on x86 a guest can be notified
>>>>>> about a memaccess change, this is not yet the case on ARM.
>>>>> Does ARM have anything like #VE whereby an in-guest entity can receive
>>>>> notification of violations?
>>>> I am not entirely sure what exactly #VE is. From my understanding,
>>>> it is used to report stage 2 violations to the guest, right? If
>>>> so, I am not aware of any ARM equivalent.
>>> #VE is a newly specified CPU exception, precisely for reporting
>>> stage 2 violations (in ARM terminology).  It works very much like a
>>> pagefault.
>> Thank you for the explanation. We don't have any specific exception
>> to report stage 2 (I guess EPT in x86 terminology) violations.
> It is currently specific to Intel's EPT implementation, but there is
> nothing in principle preventing AMD reusing the meaning for their NPT
> implementation.
>
>> If the guest physical address does not belong to an emulated device or
>> does not have an associated host address, the hypervisor will inject a
>> data/prefetch abort to the guest.
> This is where x86 and ARM differ quite a bit.  For "areas which don't
> exist", the default is to discard writes and to return ~0 for reads,
> rather than to raise a fault with the software.
>
>> Those aborts contain a fault status. For now it is always the same
>> fault: a debug fault on AArch32 and an address size fault on
>> AArch64. I don't think we can re-use one of the faults (see ARM
>> D7-1949 in DDI 0487A.j for the list of fault codes) to behave like
>> #VE.
>>
>> I guess the best would be an event channel for this purpose.
> Agreed.  If there is no hardware way of doing this, a PV way with event
> channels should work fine.
>
> ~Andrew
>

The interesting part about #VE is that it allows handling certain
violations (currently limited to EPT violations -- future
implementations might also introduce further violation types) inside
the guest, without the need to explicitly trap into the VMM. Thus, #VE
allows switching between different memory views in-guest. Because of
this, I also agree that event channels would suffice in our case,
since we do not have sufficient hardware support on ARM and would need
to trap into the VMM anyway.
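
For completeness, the tool-side flow we exercise through xen-access
would look roughly like the following. This is only a minimal sketch
using the libxc altp2m wrappers (assuming, as in this series, that
they are wired up for ARM as well); error handling is omitted and
domid/gfn are placeholders:

#include <stdbool.h>
#include <xenctrl.h>

/* Sketch: enable altp2m for a domain, create a view with one
 * read-only page, run the guest on it, then tear everything down. */
static int altp2m_example(uint32_t domid, xen_pfn_t gfn)
{
    xc_interface *xch = xc_interface_open(NULL, NULL, 0);
    uint16_t view_id;

    if ( !xch )
        return -1;

    /* Turn on the altp2m machinery for the domain. */
    xc_altp2m_set_domain_state(xch, domid, true);

    /* Create an alternate view and restrict one gfn in it. */
    xc_altp2m_create_view(xch, domid, XENMEM_access_rwx, &view_id);
    xc_altp2m_set_mem_access(xch, domid, view_id, gfn, XENMEM_access_r);

    /* Let the guest run on the restricted view ... */
    xc_altp2m_switch_to_view(xch, domid, view_id);

    /* ... and later fall back to the host view (view 0). */
    xc_altp2m_switch_to_view(xch, domid, 0);
    xc_altp2m_destroy_view(xch, domid, view_id);

    xc_altp2m_set_domain_state(xch, domid, false);
    xc_interface_close(xch);
    return 0;
}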
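
And if we use an event channel as the PV counterpart of #VE, the
guest-side shape could be something like the sketch below. To be
clear, this is purely hypothetical: the altp2m_fault_info layout and
the upcall are made up for illustration and nothing like them exists
in Xen today:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical per-vCPU record shared with the hypervisor.  The
 * hypervisor would fill it and signal an event channel instead of
 * injecting an abort into the guest. */
struct altp2m_fault_info {
    uint64_t gpa;      /* faulting guest-physical address */
    uint32_t view_id;  /* altp2m view the violation occurred in */
    uint32_t flags;    /* r/w/x and validity bits */
};

/* Guest-side event channel upcall playing the role of a #VE handler:
 * an in-guest agent inspects the record and reacts, e.g. by asking
 * the VMM (via a further hypercall) to switch to another view,
 * mirroring what #VE + VMFUNC enable on Intel hardware. */
static void altp2m_evtchn_upcall(const struct altp2m_fault_info *info)
{
    printf("stage 2 violation at gpa 0x%" PRIx64 " in view %u\n",
           info->gpa, info->view_id);
}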

Best regards,
~Sergej



 

