
Re: [Xen-devel] [PATCH v8] x86/altp2m: support for setting restrictions for an array of pages

>>> On 11.12.17 at 12:06, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 11/12/17 09:14, Jan Beulich wrote:
>>>>> On 08.12.17 at 13:42, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
>>> On 12/08/2017 02:18 PM, Jan Beulich wrote:
>>>>>>> On 24.10.17 at 12:19, <ppircalabu@xxxxxxxxxxxxxxx> wrote:
>>>>> HVMOP_altp2m_set_mem_access_multi has been added as a HVMOP (as
>>>>> opposed to a DOMCTL) for consistency with its
>>>>> HVMOP_altp2m_set_mem_access counterpart (and hence with the original
>>>>> altp2m design, where domains are allowed - with the proper altp2m
>>>>> access rights - to alter these settings), in the absence of an
>>>>> official position on the issue from the original altp2m designers.
>>>> I continue to disagree with this reasoning. I'm afraid I'm not really
>>>> willing to allow widening the badness, unless altp2m was formally
>>>> documented security-unsupported.
>>> Going the DOMCTL route here would have been the (much easier) solution,
>>> and in fact, as stated before, there has been an attempt to do so -
>>> however, IIRC Andrew has insisted that we should take care to use
>>> consistent access privilege across altp2m operations.
>> Andrew, is that the case (I don't recall anything like that)?
> My suggestion was that we don't break use cases.  The Intel use case
> specifically is for an in-guest entity to have full control of all
> altp2m functionality, and this is fine (security-wise) when permitted to
> do so by the toolstack.

IOW you mean that such guests would be considered "trusted", i.e.
whatever harm they can do is by definition not a security concern.
If so, that's fine of course, provided the default mode is secure
(which it appears to be, by virtue of altp2m being disabled altogether
by default). Yet I'm not aware of any place where this is actually
spelled out.
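For reference, the "secure by default" behaviour discussed above is what the toolstack exposes: a guest only gains access to the altp2m interface when its domain configuration opts in. A minimal sketch of the relevant xl.cfg setting (option name and mode values per the xl.cfg(5) documentation; verify the exact modes supported by your Xen version):

```
# xl.cfg fragment: altp2m is "disabled" unless the configuration opts in.
# "mixed" permits both the in-guest agent and external (privileged-domain)
# tools to use the altp2m operations; "external" restricts it to the
# latter; "limited" restricts what the in-guest agent may do.
altp2m = "mixed"
```

With the default of "disabled", none of the HVMOP_altp2m_* operations are usable by the guest, which is the property the paragraph above relies on.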

