
Re: [Xen-devel] [PATCH] xen: Fix XSM build following c/s 92942fd



On 11/02/16 00:15, Daniel De Graaf wrote:
> On 10/02/16 05:47, Jan Beulich wrote:
>>>>> On 10.02.16 at 11:39, <andrew.cooper3@xxxxxxxxxx> wrote:
>>> On 09/02/16 17:05, Jan Beulich wrote:
>>>>>>> On 09.02.16 at 17:21, <andrew.cooper3@xxxxxxxxxx> wrote:
>>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>>>> I'm sorry for the breakage / not noticing.
>>>>
>>>>> ---
>>>>> CC: Jan Beulich <JBeulich@xxxxxxxx>
>>>>> CC: Tim Deegan <tim@xxxxxxx>
>>>>> CC: Ian Campbell <Ian.Campbell@xxxxxxxxxx>
>>>>> CC: Daniel De Graaf <dgdegra@xxxxxxxxxxxxx>
>>>>>
>>>>> Is this actually an appropriate fix?  Software relying on -ENOSYS
>>>>> for Xen
>>>>> feature detection is going to break when running under an XSM
>>>>> hypervisor.
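
(As an aside, such detection typically looks like the hypothetical probe
below.  This is a standalone model, not real toolstack code: the stub
stands in for an actual hypercall, and the names are illustrative.)

    /* A probe that treats -ENOSYS as "subop absent" cannot tell an
     * absent subop from one denied by a blanket XSM -EPERM. */
    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>

    static int do_domctl_stub(unsigned int subop)
    {
        return -EPERM;                /* blanket XSM denial */
    }

    static bool subop_supported(unsigned int subop)
    {
        return do_domctl_stub(subop) != -ENOSYS;
    }

    int main(void)
    {
        /* Wrongly reports a retired subop as supported. */
        printf("supported: %d\n", subop_supported(0));
        return 0;
    }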
>>>> That's a valid concern, and it's certainly not nice for XSM to need
>>>> tweaking here at all. Perhaps ...
>>>>
>>>>> --- a/xen/xsm/flask/hooks.c
>>>>> +++ b/xen/xsm/flask/hooks.c
>>>>> @@ -1421,7 +1421,6 @@ static int flask_shadow_control(struct domain *d, uint32_t op)
>>>>>           break;
>>>>>       case XEN_DOMCTL_SHADOW_OP_ENABLE:
>>>>>       case XEN_DOMCTL_SHADOW_OP_ENABLE_TEST:
>>>>> -    case XEN_DOMCTL_SHADOW_OP_ENABLE_TRANSLATE:
>>>>>       case XEN_DOMCTL_SHADOW_OP_GET_ALLOCATION:
>>>>>       case XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION:
>>>>>           perm = SHADOW__ENABLE;
>>>> ... rather than just removing the case it should be moved to a
>>>> separate case yielding -ENOSYS or -EOPNOTSUPP (right now
>>>> shadow_domctl() returns -EINVAL anyway afaics)? (This of
>>>> course would mean that we can't completely suppress the
>>>> XEN_DOMCTL_SHADOW_OP_ENABLE_TRANSLATE #define in
>>>> public/domctl.h. We could limit it to __XEN__ though.)
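
(For concreteness, the separate case Jan describes could look like the
standalone model below.  This is not Xen source: the enum values and
function are stand-ins, and whether -ENOSYS or -EOPNOTSUPP is the right
return is left open.)

    /* The retired subop gets an explicit "not supported" return instead
     * of falling through to the blanket -EPERM default. */
    #include <errno.h>
    #include <stdio.h>

    enum {
        SHADOW_OP_ENABLE,
        SHADOW_OP_ENABLE_TRANSLATE,   /* retired by c/s 92942fd */
        SHADOW_OP_GET_ALLOCATION,
    };

    static int model_shadow_control(unsigned int op)
    {
        switch ( op )
        {
        case SHADOW_OP_ENABLE:
        case SHADOW_OP_GET_ALLOCATION:
            return 0;                 /* would do the SHADOW__ENABLE check */
        case SHADOW_OP_ENABLE_TRANSLATE:
            return -EOPNOTSUPP;       /* explicit, rather than -EPERM */
        default:
            return -EPERM;            /* blanket default, as today */
        }
    }

    int main(void)
    {
        printf("retired subop: %d\n",
               model_shadow_control(SHADOW_OP_ENABLE_TRANSLATE));
        return 0;
    }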
>>>
>>> The problem this creates is that we gain two locations prescribing the
>>> authoritative set of supported actions, which is one too many.
>>>
>>> The first question is whether blanket -EPERMs are ok.  This is the
>>> current behaviour, and as such, this patch should probably be taken in
>>> this form.  (i.e. fix the build and maintain the status quo).
>>
>> I'm fine with that, and I think I'm not going to wait for Daniel's ack
>> in this obvious case.
>
> That's fine; it seems an obvious fix anyway.
>
>>
>>> If blanket -EPERMs are not ok, we are going to need some longer-term
>>> work to make the XSM checks finer grained, and to perform them after
>>> the primary -ENOSYS checks.
>>
>> Daniel?
>
> The blanket -EPERM is there to catch adding new sub-operations that
> would otherwise be allowed without any access check.  The same is true
> for most of the other switch statements in flask/hooks.c, although
> they tend to also have an error message.
>
> The alternatives to the -EPERM return that I can think of are:
> 1. Change the default -EPERM to a return 0.  This requires being more
> careful when adding sub-operations to ensure that they are protected
> by access control.
> 2. Change the default -EPERM to -ENOSYS.  This feels like a layering
> violation to me, but it would make the error correct.
> 3. Break the xsm_shadow_control hook into 3 hooks, one per permission,
> and invoke them before taking the corresponding actions instead of one
> blanket check per subop.
> 4. Do -ENOSYS checking prior to XSM checking.
>
> Any of them would work; it really depends on which would be easiest to
> maintain and provides the sanest interface.  I don't have a
> real preference for any of them over the current situation.
>
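
For concreteness, option 3 would amount to something like the following
(hypothetical hook names mirroring the three shadow permissions; these
are not existing Xen interfaces):

    /* One hook per shadow permission, invoked at the point the
     * corresponding action is taken, rather than one blanket check
     * per subop. */
    struct domain;

    int xsm_shadow_enable(struct domain *d);    /* SHADOW__ENABLE   */
    int xsm_shadow_disable(struct domain *d);   /* SHADOW__DISABLE  */
    int xsm_shadow_logdirty(struct domain *d);  /* SHADOW__LOGDIRTY */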

Hmm - none of these are ideal.  Option 2 is a layering violation, and all
the others lead to the same increased risk of accidentally missing the
check on certain subop paths.

In particular, the -ENOSYS checking becomes harder when we have nested
decode functions (e.g. arch_domctl() called from the default case in
do_domctl()).
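
A standalone model of that nesting (the functions are stand-ins; the
real dispatch is more involved):

    /* The XSM check runs before the full decode, and the decode itself
     * is nested, so an unknown subop surfaces as -EPERM rather than
     * -ENOSYS.  Not real Xen code. */
    #include <errno.h>
    #include <stdio.h>

    static int xsm_check(unsigned int op)
    {
        return (op <= 1) ? 0 : -EPERM;    /* blanket default, as in FLASK */
    }

    static int arch_domctl(unsigned int op)
    {
        return (op == 1) ? 0 : -ENOSYS;   /* arch subops decoded here */
    }

    static int do_domctl(unsigned int op)
    {
        int rc = xsm_check(op);           /* runs before we know whether */
        if ( rc )                         /* the subop exists at all     */
            return rc;

        switch ( op )
        {
        case 0:
            return 0;                     /* common subop handled here */
        default:
            return arch_domctl(op);       /* only now is "unknown" known */
        }
    }

    int main(void)
    {
        printf("unknown subop: %d (want -ENOSYS, get -EPERM)\n",
               do_domctl(7));
        return 0;
    }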

~Andrew
