
Re: [Xen-devel] [PATCH v2 1/2] x86: Meltdown band-aid against malicious 64-bit PV guests



>>> On 16.01.18 at 12:51, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 16/01/18 07:46, Jan Beulich wrote:
>>>>> On 15.01.18 at 19:23, <andrew.cooper3@xxxxxxxxxx> wrote:
>>> Also, can we collect these together into macros, rather than
>>> opencoding?  We seem to have 3 distinct variations.
>> I had considered that (following the model you use in the SP2
>> series), but decided against it, not least because of the
>> dependent but placement-wise separated code additions needed to
>> restore the original values. Such macros might also be a hindrance
>> to backporting to really old Xen (which will then typically also
>> be built with really old tool chains) - as you certainly recall,
>> old gas had quite a few issues with macro handling.
> 
> There is nothing special in these macros though?  I found the
> SP2-style ones far easier to backport because each is a single
> slot-in line.

I've just found that the patch here needs a change in register use
in 4.7 - that would be a little harder with pre-cooked macros,
especially when they don't allow specifying all the registers to be
used (including the ones for temporaries); see the sketch below.
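
For illustration only, here is a hypothetical slot-in macro (not
taken from either series) showing why the temporaries would need to
be macro parameters as well, on top of the register carrying the
CR3 value (write_cr3 is the series' helper, taking bare register
names):

.macro RESTORE_XEN_CR3 reg:req, tmp1:req, tmp2:req
        test  %\reg, %\reg
        jz    998f                   /* 0: already on Xen's page tables */
        jns   997f                   /* > 0: CR3 value, load as-is */
        neg   %\reg                  /* < 0: recover the negated value */
997:    write_cr3 \reg, \tmp1, \tmp2
998:
.endm

A backport needing, say, %rcx instead of %rbx could then simply
invoke RESTORE_XEN_CR3 rcx, rdi, rsi - precisely the flexibility
referred to above.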

> Anyway, I'm not overly fussed, but I have a

Unfinished sentence?

>>>> @@ -71,6 +97,18 @@ iret_exit_to_guest:
>>>>          ALIGN
>>>>  /* No special register assumptions. */
>>>>  restore_all_xen:
>>>> +        /*
>>>> +         * Check whether we need to switch to the per-CPU page tables, in
>>>> +         * case we return to late PV exit code (from an NMI or #MC).
>>>> +         */
>>>> +        GET_STACK_END(ax)
>>>> +        mov   STACK_CPUINFO_FIELD(xen_cr3)(%rax), %rdx
>>>> +        mov   STACK_CPUINFO_FIELD(pv_cr3)(%rax), %rax
>>>> +        test  %rdx, %rdx
>>>> +UNLIKELY_START(g, exit_cr3)
>>> cmp or ne ?
>> "ne" (or really "nz" when used with "test") is outright wrong - we
>> want to skip the restore when the value is zero _or_ negative.
>> What's wrong with "jg" and "test" in combination? There simply is
>> no "jnsz" (other than e.g. "jnbe"). "cmp" against zero could be
>> used here, but why would I use the larger instruction when "test"
>> does?
> 
> Greater/less-than conditions aren't commonly associated with the
> test instruction, which is why this looks wrong to a reviewer.
> 
> A comment of /* Ideally jnsz, but jg will have to do */ would go a very
> long way.

Added.

> I've double-checked the logic and agree with your conclusions, but
> the only reason this works is that test unconditionally zeroes the
> overflow flag.

Because, well, no overflow is possible for AND (and hence TEST), OR,
or XOR - which is exactly why those instructions clear OF
unconditionally.
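
To make the flag logic explicit (a sketch for illustration only -
the label is made up, and UNLIKELY_START(g, exit_cr3) boils down to
a "jg" into an out-of-line unlikely section):

        /*
         * TEST sets flags as AND would, discarding the result:
         *   ZF = 1  iff %rdx == 0
         *   SF = 1  iff %rdx <  0 (bit 63 set)
         *   OF = 0  always
         * JG branches iff ZF == 0 && SF == OF; with OF known to be 0
         * this reduces to "non-zero and non-negative", i.e. strictly
         * positive - the exact complement of "zero or negative".
         */
        test  %rdx, %rdx
        jg    .Lrestore_cr3          /* made-up label */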

>>>> @@ -585,6 +692,17 @@ ENTRY(double_fault)
>>>>          movl  $TRAP_double_fault,4(%rsp)
>>>>          /* Set AC to reduce chance of further SMAP faults */
>>>>          SAVE_ALL STAC
>>>> +
>>>> +        GET_STACK_END(bx)
>>>> +        mov   STACK_CPUINFO_FIELD(xen_cr3)(%rbx), %rbx
>>>> +        test  %rbx, %rbx
>>>> +        jz    .Ldblf_cr3_okay
>>>> +        jns   .Ldblf_cr3_load
>>>> +        neg   %rbx
>>>> +.Ldblf_cr3_load:
>>>> +        write_cr3 rbx, rdi, rsi
>>>> +.Ldblf_cr3_okay:
>>>> +
>>> It is moderately common for there to be cascade faults in #DF.  This
>>> would be better if it were the general IST switch.
>> Based on the issues I had with #DF occurring while debugging this,
>> I've decided to keep the code here as simple as possible without
>> being incorrect: There's no point in looking at the incoming CR3.
>> There's no point in trying to avoid nested faults (including a
>> subsequent #DF) while restoring CR3. There's also no point in
>> retaining the value for restoring later, as we never return. In
>> fact, as mentioned elsewhere, we should imo indeed consider
>> unilaterally switching to idle_pg_table[] here.
> 
> Ok.

"Ok" to which parts of this - keeping the code simple, switching to
idle_pg_table[] (perhaps in a follow-up patch), or both?
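
For reference, my reading of the xen_cr3 encoding the hunk above
relies on: zero means no switch is needed, a positive value is the
CR3 to load, and a negative value is the negated CR3 of a switch
already carried out. Since #DF never returns, the handler only needs
to normalize and load - an annotated restatement of the hunk
(comments mine):

        mov   STACK_CPUINFO_FIELD(xen_cr3)(%rbx), %rbx
        test  %rbx, %rbx
        jz    .Ldblf_cr3_okay        /* 0: nothing to do */
        jns   .Ldblf_cr3_load        /* > 0: load as-is */
        neg   %rbx                   /* < 0: recover the CR3 value */
.Ldblf_cr3_load:
        write_cr3 rbx, rdi, rsi      /* %rdi/%rsi serve as temporaries */
.Ldblf_cr3_okay: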

Jan

