
Re: [RFC PATCH] xen/memory: Introduce a hypercall to provide unallocated space



On 8/6/21 2:09 AM, Jan Beulich wrote:
> On 05.08.2021 18:37, Daniel P. Smith wrote:
>> On 8/5/21 11:59 AM, Oleksandr wrote:
>>> On 05.08.21 18:03, Daniel P. Smith wrote:
>>>> On 7/28/21 12:18 PM, Oleksandr Tyshchenko wrote:
>>>>> --- a/xen/common/memory.c
>>>>> +++ b/xen/common/memory.c
>>>>> @@ -1811,6 +1811,62 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>>>>>               start_extent);
>>>>>           break;
>>>>>   +    case XENMEM_get_unallocated_space:
>>>>> +    {
>>>>> +        struct xen_get_unallocated_space xgus;
>>>>> +        struct xen_unallocated_region *regions;
>>>>> +
>>>>> +        if ( unlikely(start_extent) )
>>>>> +            return -EINVAL;
>>>>> +
>>>>> +        if ( copy_from_guest(&xgus, arg, 1) ||
>>>>> +             !guest_handle_okay(xgus.buffer, xgus.nr_regions) )
>>>>> +            return -EFAULT;
>>>>> +
>>>>> +        d = rcu_lock_domain_by_any_id(xgus.domid);
>>>>> +        if ( d == NULL )
>>>>> +            return -ESRCH;
>>>>> +
>>>>> +        rc = xsm_get_unallocated_space(XSM_HOOK, d);
>>>> Not sure if you are aware, but XSM_HOOK is a no-op check, meaning that
>>>> you are allowing any domain to perform this operation on any other
>>>> domain. In most cases there is an XSM check at the beginning of the
>>>> hypercall processing to do an initial clamp-down, but I am pretty sure
>>>> there is no prior XSM check on this path. Based on my understanding of
>>>> how this is intended to work, which may be incorrect, I think you would
>>>> actually want XSM_TARGET.
>>> Thank you for pointing this out.
>>> I am aware of what XSM_HOOK is, but I was trying to decide which default
>>> action would be best suited for this hypercall, and failed to think of a
>>> better alternative.
>>> I was going to choose XSM_TARGET, but the description "/* Can perform on
>>> self or your target domain */" confused me a bit, as I thought there was
>>> no target domain involved; XSM_PRIV sounded too strict to me, etc. So I
>>> decided to leave a "hook" for the RFC version. But now I see that
>>> XSM_TARGET might indeed be the better choice among all possible variants.
>>
>> If you unravel the craftiness that is xsm_default_action, there is
>> actually a bit of hierarchy there. If you set the default_action to
>> XSM_TARGET, it will first check whether the calling domain (src) is the
>> target, then fall into the XSM_DM_PRIV check, which is whether
>> src->target == target, and finally check is_control_domain(src). That
>> restricts the operation so that a domain can call it on itself, a device
>> model domain (stubdom) can call it on the domain it is backing, and the
>> control domain can make the call. I am not 100% sure on this, but I do
>> not believe a hardware domain would be able to make the call with it set
>> to XSM_TARGET without employing Flask.
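
(For reference, that fall-through is roughly the following. This is a
simplified paraphrase of the dummy policy in xen/include/xsm/dummy.h, not
the verbatim source; it flattens the nested-switch trick the real
xsm_default_action uses, and the helper name is made up for illustration.)

/* Sketch only: flattened XSM_TARGET fall-through of the dummy policy. */
static int xsm_target_sketch(struct domain *src, struct domain *target)
{
    /* 1. A domain may always perform the operation on itself. */
    if ( src == target )
        return 0;

    /* 2. A device model (stub) domain may act on the domain it backs
     *    (the XSM_DM_PRIV step). */
    if ( target && src->target == target )
        return 0;

    /* 3. Otherwise only the control domain is allowed (the XSM_PRIV step). */
    if ( is_control_domain(src) )
        return 0;

    return -EPERM;
}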
> 
> Afaict (perhaps leaving aside late-hwdom, which I have little knowledge
> of) right now we have is_control_domain(d) == is_hardware_domain(d).

That is my fault for not being more explicit. When I refer to a
"hardware domain", I am referring to what you call "late-hwdom". When a
hardware domain that is not dom0 is constructed, it does not get the
`is_privileged` flag set to true, which results in is_control_domain(d)
returning false. Additionally, there is currently no `enum xsm_default`
value for hardware domain access/privilege, and thus there is no
rule/access check defined in `default_action()` that allows any of the
XSM hooks to be restricted to the hardware domain. That is what I was
referring to regarding the use of the hardware domain, aka late-hwdom,
without Flask. With Flask it becomes possible for the hardware domain to
be granted access to calls that are reserved to the control domain under
the dummy/default access policy, thus allowing it to function fully.
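
To make that concrete, under the dummy policy the control-domain check
boils down to the `is_privileged` flag, while the hardware domain is a
separate notion entirely. Roughly (paraphrased from xen/include/xen/sched.h,
simplified and with made-up helper names, not the verbatim source):

/* Sketch only: what the two predicates effectively test. */
static bool control_domain_sketch(const struct domain *d)
{
    return d->is_privileged;      /* set for dom0, not for a late hwdom */
}

static bool hardware_domain_sketch(const struct domain *d)
{
    return d == hardware_domain;  /* dom0, or the late hwdom if one exists */
}

A late hardware domain satisfies the second test but not the first, and
since the dummy default_action() only ever tests the first, it cannot pass
a check reserved to the control domain without Flask.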

dps



 

