
Re: [Xen-devel] Device model operation hypercall (DMOP, re qemu depriv)



On 03/08/16 15:16, Jan Beulich wrote:
>>>> On 03.08.16 at 15:37, <ian.jackson@xxxxxxxxxxxxx> wrote:
>> Jan Beulich writes ("Re: Device model operation hypercall (DMOP, re qemu 
>> depriv)"):
>>> On 03.08.16 at 12:29, <ian.jackson@xxxxxxxxxxxxx> wrote:
>>>> Does that mean that functionality exposed by all the proposed HVMCTLs
>>>> is currently available to stubdoms?
>>>
>>> That series only moves code from one hypercall to another (new) one,
>>> without any security implications at all. What has been available to
>>> stubdoms prior to that series will be available the same way once it
>>> has been applied.
>>
>> So what you mean is that in HVMCTL, the privilege check is retained in
>> the individual HVMCTL sub-operation?  (I.e. what used to be IS_PRIV or
>> IS_PRIV_FOR - and implicitly, some sub-ops would be IS_PRIV and some
>> IS_PRIV_FOR.)
>>
>> But looking at your v2 01/11, I see this in do_hvmctl:
>>
>>    +    rc = xsm_hvm_control(XSM_DM_PRIV, d, op.cmd);
>>    +    if ( rc )
>>    +    {
>>    +        rcu_unlock_domain(d);
>>    +        return rc;
>>    +    }
>>
>> And looking at v2 02/11, I didn't see any privilege check in the
>> specific hypercall.
> 
> Of course - there were no explicit IS_PRIV{,_FOR}() uses before
> (i.e. the patch also doesn't remove any); these all sit behind the
> XSM hook. And no, there are no per-sub-op privilege checks:
> they're being consolidated into just the one you quote above.
> 
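For context: in the dummy (non-Flask) XSM policy, an XSM_DM_PRIV check
like the one quoted above resolves to roughly the following.  This is
a simplified sketch with stand-in types, not the exact Xen code:

    #include <errno.h>      /* EPERM */
    #include <stdbool.h>

    /* Simplified stand-ins for the real Xen types. */
    struct domain {
        bool is_privileged;        /* the control domain (dom0)? */
        struct domain *target;     /* domain we are device model for */
    };

    typedef enum { XSM_DM_PRIV, XSM_PRIV } xsm_default_t;

    static int xsm_default_action(xsm_default_t action,
                                  struct domain *src,
                                  struct domain *target)
    {
        switch ( action )
        {
        case XSM_DM_PRIV:
            /* The designated device model for the target passes. */
            if ( target && src->target == target )
                return 0;
            /* Otherwise fall through to the full privilege check. */
        case XSM_PRIV:
            if ( src->is_privileged )
                return 0;
            return -EPERM;
        }
        return -EPERM;
    }

So the single xsm_hvm_control() call does give IS_PRIV_FOR-like
semantics to every sub-op at once.
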
>> With DMOP it would make sense for the privilege check to be
>> IS_PRIV_FOR, in the DMOP dispatcher.  But it seems that this privilege
>> check already exists in HVMCTL in the right form.
>>
>> So AFAICT HVMCTLs already guarantee not to have an adverse impact on
>> the whole system.  If that weren't the case, then a stub dm could
>> exploit the situation.
> 
> And to tell you the truth, I'm not entirely convinced that all the
> code implementing those operations (be it in their current hvmop
> form or the new hvmctl one - again, the series only moves code
> around) is really safe in this regard. But with there even being
> at least one domctl not on xsm-flask.txt's safe-for-disaggregation
> list, but reportedly used by qemu (I don't recall right now which
> exact one it is), stubdoms can't be considered fully secure right
> now anyway.
> 
>> Is the audit that is required to check that the DMOP doesn't have an
>> adverse effect on the _calling domain_?  AFAICT most HVMCTLs/DMOPs
>> have /no/ effect on the calling domain, other than as part of argument
>> passing.  So that audit should be easy.
>>
>> So I am confused.  What am I missing?
> 
> The main adverse effect on the whole system that I can see
> would be long-latency operations, but I can't exclude others: just
> look at the final patch of the series, which fixes a serialization bug
> that I noticed while doing the code movement.  I don't think lack
> of proper serialization is guaranteed to affect only the target
> domain.
> 
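The usual way Xen bounds such latency is for a long-running sub-op to
preempt itself and restart as a continuation; a sub-op with unbounded
iteration that lacks this pattern is exactly the sort of whole-system
risk described above.  An illustrative fragment only -
more_work_to_do(), process_one_unit(), 'state' and the hvmctl
hypercall number are placeholders, while the two hypercall_* helpers
are the real Xen primitives:

    /* Sketch of the standard Xen preemption idiom. */
    while ( more_work_to_do(state) )        /* placeholder predicate */
    {
        process_one_unit(state);            /* placeholder work item */

        if ( hypercall_preempt_check() )
        {
            /*
             * Re-issue the same hypercall rather than running to
             * completion; progress must be stashed where the retry
             * can recover it.
             */
            rc = hypercall_create_continuation(__HYPERVISOR_hvmctl,
                                               "h", u_hvmctl);
            break;
        }
    }
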
>>>> Now, there may be other ways to represent/record the security status.
>>>> But it will be necessary to either (i) avoid violating the DMOP
>>>> security promise, by making questionable calls unavailable via DMOP,
>>>> or (ii) try to retrofit the security promise onto DMOP later.
>>>>
>>>> I think (ii) is not a good approach.  It would amount to introducing a
>>>> whole new set of interfaces, and then later trying to redefine them to
>>>> have a particular security property which was not originally there.
>>>
>>> I agree that (i) would be the better approach, but I don't think I
>>> can assess when I would find the time to do the required auditing
>>> of all involved code. Plus I don't see the difference between going
>>> the (ii) route with the presented hvmctl series vs keeping things as
>>> they are (under hvmop) - in both cases the security promise will
>>> need to be retrofitted onto existing code.
>>
>> If we don't apply HVMCTL, we can introduce DMOP and then individually
>> move hypercalls from hvmop to DMOP as they are audited.
>>
>> Would a similar approach be acceptable even after HVMCTL ?
>>
>> That is, the following plan:
>>
>> 1. Apply HVMCTL right away.  This solves the cleanup problem,
>>    but leaves the qemu depriv problem unsolved.
>>
>> 2. After the necessary discussion to understand and refine the DMOP
>>    design, create a new DMOP hypercall with appropriate security
>>    promises and whatever stability promises are agreed, but with no
>>    sub-ops.
>>
>> 3. Move each HVMCTL to DMOP, one by one, as it is audited.
>>
>> 4. Perhaps some HVMCTLs will remain which are inherently unsuitable
>>    for use with qemu depriv.  If not, perhaps eventually delete HVMCTL
>>    entirely, or leave it empty (with no subops).
>>
>> This has the downside that we end up moving all the DMOPs twice.  But
>> it does allow us to separate the audit work from the cleanup/reorg.
> 
> Well, moving things twice doesn't sound attractive. I was rather
> thinking of declaring hvmctl (or dmop, if that's the intended name)
> sub-ops secure one by one as they get audited. Part of my reason
> for being hesitant to do such an audit myself (apart from the time
> constraint) is remembering my failure to do so properly as a
> follow-up to XSA-77 (which needed to be corrected a couple of
> months back), plus not really having got any help with that back
> then (which arguably might end up better here, as this appears to
> be an area of wider interest).

There is certainly a wider interest, and I don't think anyone was
suggesting that you would be doing any of the work we're discussing to
allow qemu depriv.

But from the qemu depriv perspective, we want filtering to happen in
dom0, and we want dom0 to have a very simple rule for knowing whether to
allow a call or not: "Is the hypercall equal to the DMOP hypercall?"
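
In other words, the dom0-side filter ought to be expressible in a
couple of lines.  A hypothetical sketch - struct privcmd_hypercall
mirrors the Linux privcmd layout, __HYPERVISOR_dm_op is the assumed
number for the new hypercall, and 'restricted' marks a file handle
owned by a deprivileged device model:

    #include <stdbool.h>
    #include <stdint.h>

    struct privcmd_hypercall {
        uint64_t op;            /* hypercall number */
        uint64_t arg[5];        /* hypercall arguments */
    };

    #define __HYPERVISOR_dm_op 41   /* assumed DMOP number */

    static bool restricted_hypercall_ok(
        const struct privcmd_hypercall *hc, bool restricted)
    {
        if ( !restricted )
            return true;        /* fully privileged caller: allow all */

        /* The one simple rule: only the DMOP hypercall goes through. */
        return hc->op == __HYPERVISOR_dm_op;
    }

Everything else - which sub-op, which target domain - is then the
hypervisor's problem, via the XSM_DM_PRIV path sketched earlier.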

So before qemu depriv can be usable, *all* the HVMCTL operations would
need to be audited, and those that were deemed insecure would need to be
either fixed or removed.

 -George
