Re: [PATCH v2 07/14] xen/riscv: introduce exception context
On 30/01/2023 10:44 pm, Julien Grall wrote:
> Hi Jan,
>
> On 30/01/2023 13:50, Jan Beulich wrote:
>> On 30.01.2023 12:54, Oleksii wrote:
>>> Hi Jan,
>>>
>>> On Fri, 2023-01-27 at 15:24 +0100, Jan Beulich wrote:
>>>> On 27.01.2023 14:59, Oleksii Kurochko wrote:
>>>>> --- /dev/null
>>>>> +++ b/xen/arch/riscv/include/asm/processor.h
>>>>> @@ -0,0 +1,82 @@
>>>>> +/* SPDX-License-Identifier: MIT */
>>>>> +/******************************************************************************
>>>>> + *
>>>>> + * Copyright 2019 (C) Alistair Francis <alistair.francis@xxxxxxx>
>>>>> + * Copyright 2021 (C) Bobby Eshleman <bobby.eshleman@xxxxxxxxx>
>>>>> + * Copyright 2023 (C) Vates
>>>>> + *
>>>>> + */
>>>>> +
>>>>> +#ifndef _ASM_RISCV_PROCESSOR_H
>>>>> +#define _ASM_RISCV_PROCESSOR_H
>>>>> +
>>>>> +#ifndef __ASSEMBLY__
>>>>> +
>>>>> +/* On stack VCPU state */
>>>>> +struct cpu_user_regs
>>>>> +{
>>>>> +    unsigned long zero;
>>>>> +    unsigned long ra;
>>>>> +    unsigned long sp;
>>>>> +    unsigned long gp;
>>>>> +    unsigned long tp;
>>>>> +    unsigned long t0;
>>>>> +    unsigned long t1;
>>>>> +    unsigned long t2;
>>>>> +    unsigned long s0;
>>>>> +    unsigned long s1;
>>>>> +    unsigned long a0;
>>>>> +    unsigned long a1;
>>>>> +    unsigned long a2;
>>>>> +    unsigned long a3;
>>>>> +    unsigned long a4;
>>>>> +    unsigned long a5;
>>>>> +    unsigned long a6;
>>>>> +    unsigned long a7;
>>>>> +    unsigned long s2;
>>>>> +    unsigned long s3;
>>>>> +    unsigned long s4;
>>>>> +    unsigned long s5;
>>>>> +    unsigned long s6;
>>>>> +    unsigned long s7;
>>>>> +    unsigned long s8;
>>>>> +    unsigned long s9;
>>>>> +    unsigned long s10;
>>>>> +    unsigned long s11;
>>>>> +    unsigned long t3;
>>>>> +    unsigned long t4;
>>>>> +    unsigned long t5;
>>>>> +    unsigned long t6;
>>>>> +    unsigned long sepc;
>>>>> +    unsigned long sstatus;
>>>>> +    /* pointer to previous stack_cpu_regs */
>>>>> +    unsigned long pregs;
>>>>> +};
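
As an aside, a struct like this is normally paired with generated
assembly offsets, so that the trap entry code can save registers by
symbolic offset instead of hard-coded numbers. A rough sketch of the
usual idiom follows; the OFFSET macro and symbol names here are made up
for illustration, and the real mechanism post-processes the compiler's
assembly output to produce a header:

    #include <stddef.h>

    /* Emit "==>SYM <offset><==" into the generated assembly, to be
     * extracted by a build script into #define lines. */
    #define OFFSET(sym, str, mem) \
        asm volatile ( ".ascii \"==>" #sym " %0<==\"" \
                       :: "i" (offsetof(str, mem)) )

    void asm_offsets(void)
    {
        OFFSET(CPU_USER_REGS_RA,   struct cpu_user_regs, ra);
        OFFSET(CPU_USER_REGS_SP,   struct cpu_user_regs, sp);
        OFFSET(CPU_USER_REGS_SEPC, struct cpu_user_regs, sepc);
    }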
>>>>
>>>> Just to restate what I said on the earlier version: we have a struct
>>>> of this name in the public interface for x86. Besides the confusion
>>>> of re-using the name for something private, I'd still like to
>>>> understand what the public interface plans are. This is specifically
>>>> because I think it would be better to re-use suitable public
>>>> interface structs internally where possible. But that of course
>>>> requires spelling out such parts of the public interface first.
>>>>
>>> I am not sure that I get you here.
>>> I grepped a little bit and found out that each architecture declares
>>> this structure inside an arch-specific folder.
>>>
>>> Mostly, cpu_user_regs is used to save/restore the current CPU state
>>> during traps (exceptions/interrupts) and in context_switch().
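
For concreteness, a minimal sketch of C code operating on such a frame:
the entry assembly saves all GPRs plus sepc/sstatus in this layout and
passes a pointer to C, which may modify the saved state before it is
restored on exit. The helper below is made up for illustration and
glosses over 2-byte compressed instructions:

    /* Advance the saved PC past a 4-byte trapping instruction, so that
     * execution resumes after the fault once the frame is restored. */
    static void advance_pc(struct cpu_user_regs *regs)
    {
        regs->sepc += 4;
    }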
>>
>> Arm effectively duplicates the public interface struct
>> vcpu_guest_core_regs and the internal struct cpu_user_regs (and this
>> goes as far as also duplicating the __DECL_REG() helper). Personally I
>> find such duplication odd at first glance at least; maybe there is a
>> specific reason for this in Arm. But whether the public interface
>> struct can be re-used can likely only be known once it has been
>> spelled out.
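
For reference, the Arm arrangement looks roughly like this (paraphrased
from memory rather than quoted, with the field list abridged); the same
helper is defined once in the public header and again internally, so
each struct can give a 64-bit register and its 32-bit alias one slot:

    #include <stdint.h>

    /* Declare a register usable via its AArch64 or AArch32 name. */
    #define __DECL_REG(n64, n32) union { \
        uint64_t n64;                    \
        uint32_t n32;                    \
    }

    struct vcpu_guest_core_regs {   /* public ABI: layout is fixed */
        __DECL_REG(x0, r0_usr);
        __DECL_REG(x1, r1_usr);
        /* ... remaining GPRs, banked sp/lr, pc, cpsr ... */
    };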
>
> There is some forced padding, a different ordering, and some extra
> fields in the internal version to simplify the assembly code.
>
> The rationale is explained in 1c38a1e937d3 ("xen: arm: separate guest
> user regs from internal guest state").
>
> We also have a split between 32-bit and 64-bit to avoid doubling up
> the size in the former case.
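
In other words, the internal layout is free to diverge from the public
one, roughly along these lines. This is shape only: the names, extra
field, and padding below are illustrative, not Arm's actual layout:

    #include <stdint.h>

    struct cpu_user_regs {      /* internal: Xen may rearrange freely */
        uint64_t x0;
        /* ... remaining GPRs, ordered to suit the entry assembly ... */
        uint64_t pc, cpsr;
        uint64_t hsr;           /* extra field the public ABI must not
                                   carry (e.g. a syndrome register) */
        uint64_t pad;           /* forced padding to keep the stack
                                   pointer 16-byte aligned */
    };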

And on top of these reasons, I feel I need to remind you that we still
need to break these apart on x86: to fix a stack OoB access in the #DF
handler, to fix stack alignment for UEFI, and to remove the relics of
vm86 mode, not to mention adding support for FRED.

It was a fundamental design error that Xen's internal representation
ever made it into the public API, and it absolutely does not want
repeating.

~Andrew