
Re: [Xen-devel] [PATCH] x86/VMX: sanitize VM86 TSS handling

>>> On 13.02.17 at 19:26, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 13/02/17 13:19, Jan Beulich wrote:
>> ---
>> TBD: Do we really want to re-init the TSS every time we are about to
>>      use it? This can happen quite often during boot, especially while
>>      grub is running.
> The only case we need worry about is if the guest manually writes to the
> area covered by the vm86 tss.  I think, looking at the logic, that we
> properly restore the old %tr when re-entering protected mode, so we
> aren't at risk of having the vm86 tss on the leaving-side of a hardware
> task switch.

Yes. But that's the whole point of the question: The guest could
equally well write to the TSS manually right after we've initialized
it. And it only harms itself by doing so, hence we could as well
init the TSS on a less frequently executed path.

> We have lasted thus far without, but given the issues recently
> identified, the only safe conclusion to be drawn is that this
> functionality hasn't been used in anger.
>
> For sensible performance, we need to keep the IO bitmap clear to avoid
> taking the #GP emulation path.
>
> For correct behaviour, we need the entire software bitmap to be 0.

This one is just for reasonable performance too, afaict: if it
faulted, we'd emulate the INT $nn, just like we do for port accesses.

>> +void hvm_prepare_vm86_tss(struct vcpu *v, uint32_t base, uint32_t limit)
>> +{
>> +    /*
>> +     * If the provided area is large enough to cover at least the ISA port
>> +     * range, keep the bitmaps outside the base structure. For rather small
>> +     * areas (namely relevant for guests having been migrated from older
>> +     * Xen versions), maximize interrupt vector and port coverage by pointing
>> +     * the I/O bitmap at 0x20 (which puts the interrupt redirection bitmap
>> +     * right at zero), accepting accesses to port 0x235 (represented by bit 5
>> +     * of byte 0x46) to trigger #GP (which will simply result in the access
>> +     * being handled by the emulator via a slightly different path than it
>> +     * would be anyway). Be sure to include one extra byte at the end of the
>> +     * I/O bitmap (hence the missing "- 1" in the comparison is not an
>> +     * off-by-one mistake), which we deliberately don't fill with all ones.
>> +     */
>> +    uint16_t iomap = (limit >= sizeof(struct tss32) + (0x100 / 8) + (0x400 / 8)
>> +                      ? sizeof(struct tss32) : 0) + (0x100 / 8);
>> +
>> +    ASSERT(limit >= sizeof(struct tss32) - 1);
>> +    hvm_copy_to_guest_phys(base, NULL, limit + 1, v);
>> +    hvm_copy_to_guest_phys(base + offsetof(struct tss32, iomap),
>> +                           &iomap, sizeof(iomap), v);
> This should be linear rather than phys.

Strictly speaking yes, but since we're running with an identity map,
it doesn't matter. And I'd prefer not to have to introduce a vcpu
parameter to the linear copy function, as that would require quite
a few changes elsewhere. Would you be okay with me just adding a
comment to that effect here?

> In practice it will only make a difference if the guest manages to
> corrupt its IDENT_PT (which is why I suggested that we move the IDENT_PT
> entirely outside of the guest control), but in such a case, we will
> re-enter the guest with the vm86 tss pointing to something other than
> what we have just initialised.

Right, but again the guest would only harm itself.

>> @@ -4484,6 +4511,31 @@ static int hvmop_set_param(
>>          }
>>          d->arch.x87_fip_width = a.value;
>>          break;
>> +
>> +    case HVM_PARAM_VM86_TSS:
>> +        /* Hardware would silently truncate high bits. */
> Does it?  I'd have thought it would be a vmentry failure.

I'm pretty sure I've tried it out, as I didn't have it that way initially.
I think a VM entry failure would result only if the address was


Xen-devel mailing list
