
Re: [Xen-devel] Xen Security Advisory 154 (CVE-2016-2270) - x86: inconsistent cachability flags on guest mappings



>>> On 25.01.17 at 15:08, <dwmw2@xxxxxxxxxxxxx> wrote:
>> --- a/xen/arch/x86/hvm/mtrr.c
>> +++ b/xen/arch/x86/hvm/mtrr.c
>> @@ -770,8 +770,17 @@ int epte_get_entry_emt(struct domain *d,
>>      if ( v->domain != d )
>>          v = d->vcpu ? d->vcpu[0] : NULL;
>>  
>> -    if ( !mfn_valid(mfn_x(mfn)) )
>> +    if ( !mfn_valid(mfn_x(mfn)) ||
>> +         rangeset_contains_range(mmio_ro_ranges, mfn_x(mfn),
>> +                                 mfn_x(mfn) + (1UL << order) - 1) )
>> +    {
>> +        *ipat = 1;
>>          return MTRR_TYPE_UNCACHABLE;
>> +    }
>> +
>> +    if ( rangeset_overlaps_range(mmio_ro_ranges, mfn_x(mfn),
>> +                                 mfn_x(mfn) + (1UL << order) - 1) )
>> +        return -1;
>>  
>>      switch ( hvm_get_mem_pinned_cacheattr(d, gfn, order, &type) )
>>      {
> 
> This doesn't look right. That second 'if ( rangeset_overlaps_range(…) )'
> looks tautologically false: if it *were* true, the first if() statement
> would already have fired, so the second is never reached.

Note the difference between "contains" and "overlaps".
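
To make the distinction concrete, here is a standalone illustration using
the inclusive-range semantics I'd expect rangeset_contains_range() and
rangeset_overlaps_range() to have. It is not Xen code, just a model of why
the second check isn't dead for order > 0 mappings:

/* Standalone sketch: an order-9 (2Mb) mapping that starts inside a
 * write-protected MMIO range but extends past its end overlaps the
 * range without being contained in it. */
#include <stdio.h>
#include <stdbool.h>

struct range { unsigned long s, e; };            /* inclusive bounds */

static bool contains(struct range r, unsigned long s, unsigned long e)
{
    return s >= r.s && e <= r.e;                 /* whole [s,e] inside r */
}

static bool overlaps(struct range r, unsigned long s, unsigned long e)
{
    return s <= r.e && e >= r.s;                 /* any intersection */
}

int main(void)
{
    struct range ro = { 0x1000, 0x10ff };        /* made-up r/o MMIO range */
    unsigned long mfn = 0x1000, order = 9;
    unsigned long last = mfn + (1UL << order) - 1;

    /* Prints "contains=0 overlaps=1": the first if() is skipped, the
     * second one fires and returns -1. */
    printf("contains=%d overlaps=%d\n",
           contains(ro, mfn, last), overlaps(ro, mfn, last));
    return 0;
}

So for a large-order mapping straddling the edge of an r/o region only the
overlaps check fires, and the -1 return (as I understand it) tells the
caller it can't install a single large mapping there.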

> The reason I'm looking is that the first if() statement is firing for
> MMIO regions where it probably shouldn't. This means that guests are
> mapping MMIO BARs of assigned devices and getting *forced* UC (because
> *ipat=1) instead of taking the if ( direct_mmio ) path slightly further
> down, which wouldn't set the 'ignore PAT' bit and would thus allow them
> to enable WC through their PAT.
> 
> It makes me wonder if the first was actually intended to be
> '!mfn_valid() && rangeset_contains_range(…)' — with logical && rather
> than ||. That would make some sense because it's then explicitly
> refusing to map pages which are in mmio_ro_ranges *and* mfn_valid().

No, this surely wasn't the intention. As Andrew has tried to
explain on irc, the only valid implication is !mfn_valid() -> MMIO.
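
To spell out the forced-UC point from the quoted mail: with the 'ignore
PAT' bit set the guest's PAT has no say at all, while a plain UC EPT type
still lets a PAT=WC mapping end up write-combined. Below is a rough
standalone model of that (my simplification of the EPT/PAT combining
rules, using the usual MTRR type encodings; only the UC/WC corner is
shown, and this is not Xen code):

#include <stdio.h>
#include <stdint.h>

#define MTRR_TYPE_UNCACHABLE  0
#define MTRR_TYPE_WRCOMB      1

/* Simplified combining rule: with ipat set the EPT type wins outright;
 * with it clear, a guest PAT type of WC can override an EPT type of UC
 * (much like PAT=WC overriding MTRR=UC on bare metal).  All other
 * combinations are omitted here. */
static uint8_t effective_type(uint8_t emt, uint8_t ipat, uint8_t guest_pat)
{
    if ( ipat )
        return emt;
    if ( emt == MTRR_TYPE_UNCACHABLE && guest_pat == MTRR_TYPE_WRCOMB )
        return MTRR_TYPE_WRCOMB;
    return emt;
}

int main(void)
{
    /* Forced UC (*ipat = 1): guest PAT=WC is ignored -> UC (0). */
    printf("%u\n", effective_type(MTRR_TYPE_UNCACHABLE, 1, MTRR_TYPE_WRCOMB));
    /* direct_mmio-style UC (*ipat = 0): guest PAT=WC -> WC (1). */
    printf("%u\n", effective_type(MTRR_TYPE_UNCACHABLE, 0, MTRR_TYPE_WRCOMB));
    return 0;
}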

> And then there's a 'if (direct_mmio) return UC;' further down which
> looks like it'd do the right thing for the use case I'm actually
> testing. I may see if I can construct a straw man patch, but I'm kind
> of unfamiliar with this code so it should be taken with a large pinch
> of salt...

If there weren't INVALID_MFN to take care of, the !mfn_valid()
check could simply move down and be combined with the
direct_mmio one.
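
Purely as a sketch of that restructuring, with made-up boolean inputs
standing in for the real rangeset/mfn_valid/direct_mmio checks and with
INVALID_MFN deliberately ignored (which is exactly the problem); this is
not a proposed patch:

#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

#define MTRR_TYPE_UNCACHABLE 0
#define MTRR_TYPE_WRBACK     6

static int emt_sketch(bool valid_mfn, bool ro_contained, bool ro_partial,
                      bool direct_mmio, uint8_t *ipat)
{
    *ipat = 0;

    if ( ro_contained )              /* whole range is write-protected MMIO */
    {
        *ipat = 1;
        return MTRR_TYPE_UNCACHABLE;
    }
    if ( ro_partial )                /* partial overlap: refuse a large page */
        return -1;

    /* ... pinned cache attribute / MTRR handling elided ... */

    if ( direct_mmio || !valid_mfn ) /* combined check; ipat stays clear,
                                        so guest PAT=WC would still work */
        return MTRR_TYPE_UNCACHABLE;

    *ipat = 1;                       /* ordinary RAM case, simplified */
    return MTRR_TYPE_WRBACK;
}

int main(void)
{
    uint8_t ipat;
    int emt = emt_sketch(false, false, false, true, &ipat);
    printf("emt=%d ipat=%u\n", emt, ipat);   /* UC, PAT not forced off */
    return 0;
}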

> There is a separate question of whether it's actually safe to let the
> guest map an MMIO page with both UC and WC simultaneously. Empirically,
> it seems to be OK: I hacked a guest kernel not to enforce the mutual
> exclusion, mapped the BAR with both UC and WC, and ran two kernel
> threads reading and writing the whole BAR for a number of iterations.
> The WC thread went a lot faster than the UC one, so it will often have
> been touching the same locations as the UC thread as it 'overtook' it,
> and nothing bad happened. This seems reasonable, as the dire warnings
> and machine checks are more about *cached* vs. uncached mappings, not
> WC vs. UC. But it would be good to have a definitive answer from Intel
> and AMD about whether it's safe.

Well, in the context of this XSA we asked both of them, and iirc we
got a vague reply from Intel and none from AMD. In fact we deferred
the XSA for quite a while waiting for any useful feedback. To AMD's
advantage I'd like to add, though, that iirc their PM (programmer's
manual) is a little clearer on the specific question of UC and WC
you raise: they group the various cacheabilities into two groups
(cacheable and uncacheable) and only require that there be no
mixture between the groups. Iirc Intel's somewhat vague reply
allowed us to conclude we're likely safe that way on their side too.
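
Expressed as code, the rule as I read that description (the grouping
below is my interpretation, not a quote from the APM):

#include <stdio.h>
#include <stdbool.h>

/* Usual x86 memory type encodings. */
enum memtype { UC = 0, WC = 1, WT = 4, WP = 5, WB = 6 };

static bool is_cacheable(enum memtype t)
{
    return t == WB || t == WT || t == WP;
}

/* Two simultaneous mappings of the same page are only a problem if they
 * mix the cacheable and uncacheable groups. */
static bool mixes_groups(enum memtype a, enum memtype b)
{
    return is_cacheable(a) != is_cacheable(b);
}

int main(void)
{
    printf("UC+WC: %s\n", mixes_groups(UC, WC) ? "mixed" : "ok");  /* ok */
    printf("UC+WB: %s\n", mixes_groups(UC, WB) ? "mixed" : "ok");  /* mixed */
    return 0;
}

On that reading, the UC plus WC aliasing in the experiment above falls
entirely within the uncacheable group.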

Jan
