Re: [PATCH v2] x86/emul: Remove fallback path from SWAPGS
On 07/04/2026 5:00 pm, Teddy Astie wrote:
> On 07/04/2026 at 16:27, Andrew Cooper wrote:
>> In real hardware, accesses to the registers cannot fail. The error paths are
>> just an artefact of the hook functions needing to return something.
>>
>> The best effort unwind is also something that doesn't exist in real hardware,
>> and makes the logic more complicated to follow. Instead, use an
>> ASSERT_UNREACHABLE() with a fallback of injecting #DF. Hitting this path is
>> an error in Xen.
>>
>> While adjusting, remove {read,write}_segment() and use {read,write}_msr() to
>> access MSR_GS_BASE. There's no need to access the other parts of the GS
>> segment, and this is less work behind the scenes.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>> ---
>> CC: Jan Beulich <JBeulich@xxxxxxxx>
>> CC: Roger Pau Monné <roger.pau@xxxxxxxxxx>
>>
>> v2:
>> * Retain x86_emul_reset_event()
>> * Pass an error code for #DF
>> * Drop goto done now that generate_exception() is used
>> * Use 2x{read,write}_msr()
>>
>> Tested using LKGS's extension of the test emulator for SWAPGS.
>> ---
>> xen/arch/x86/x86_emulate/0f01.c | 28 +++++++++++++++-------------
>> 1 file changed, 15 insertions(+), 13 deletions(-)
>>
>> diff --git a/xen/arch/x86/x86_emulate/0f01.c b/xen/arch/x86/x86_emulate/0f01.c
>> index 6c10979dd650..54bd6faf0f2c 100644
>> --- a/xen/arch/x86/x86_emulate/0f01.c
>> +++ b/xen/arch/x86/x86_emulate/0f01.c
>> @@ -189,22 +189,24 @@ int x86emul_0f01(struct x86_emulate_state *s,
>> generate_exception_if(!mode_ring0(), X86_EXC_GP, 0);
>> fail_if(!ops->read_segment || !ops->read_msr ||
>> !ops->write_segment || !ops->write_msr);
> Do we still need the checks for ops->{read,write}_segment if we're not
> using them anymore?
Oh, yes they can be dropped now.
Please send a new patch. I've already committed this to unblock some of
Jan's work.
~Andrew