Re: [Xen-devel] [PATCH for-4.13 v4 19/19] xen/arm: entry: Ensure the guest state is synced when receiving a vSError
On Thu, 31 Oct 2019, Julien Grall wrote:
> When an SError/Asynchronous Abort generated by the guest has been
> consumed, we will skip the handling of the initial exception.
>
> This includes the calls to enter_hypervisor_from_guest{, _noirq} that
> are used to synchronize part of the guest state with the internal
> representation and re-enable workarounds (e.g. SSBD). However, we still
> call leave_hypervisor_to_guest() which is used for preempting the guest
> and synchronizing back part of the guest state.
>
> enter_hypervisor_from_guest{, _noirq} and leave_hypervisor_to_guest()
> work as a pair, so skipping the first two may result in losing part of
> the guest state.
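A minimal, self-contained C sketch of this pairing (the helper names are the ones used by this series; the bodies are stand-ins, the real sequencing lives in entry.S):

    #include <stdbool.h>
    #include <stdio.h>

    /* Stand-ins: the real helpers synchronize guest state with Xen's
     * internal representation and handle preemption. */
    static void enter_hypervisor_from_guest(void)
    {
        puts("sync guest state in (e.g. vGIC LRs), re-enable workarounds");
    }

    static void leave_hypervisor_to_guest(void)
    {
        puts("preemption point, sync guest state back out");
    }

    /* Shape of the guest vector after this patch: enter/leave always run
     * as a pair; only the trap handling itself is skipped when a guest
     * SError was consumed on entry. */
    static void guest_vector(bool serror_consumed)
    {
        enter_hypervisor_from_guest();
        if ( !serror_consumed )
            puts("do_trap_*()");
        leave_hypervisor_to_guest();
    }

    int main(void)
    {
        guest_vector(false); /* normal trap */
        guest_vector(true);  /* guest SError consumed: skip do_trap_*() */
        return 0;
    }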
>
> An example is the new vGIC which will save the state of the LRs on exit
> from the guest and rewrite all of them on entry to the guest.
>
> A more worrying example is that the SSBD workaround may not be
> re-enabled. If leave_hypervisor_to_guest() reschedules the vCPU, then we
> may end up running a lot of code with the SSBD workaround disabled.
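As background on what "re-enabled" means here (a hedged sketch, not Xen's actual code): the SSBD mitigation is flipped via the architected SMCCC ARCH_WORKAROUND_2 call when switching between guest and hypervisor. The function ID below is the architected value; the SMC wrapper is a stand-in:

    #include <stdint.h>
    #include <stdio.h>

    /* Architected SMCCC 1.1 function ID controlling SSBD. */
    #define ARM_SMCCC_ARCH_WORKAROUND_2_FID 0x80007fffU

    /* Stand-in for the real SMC into firmware. */
    static void smccc_workaround_2(uint64_t enable)
    {
        printf("SMC %#x, enable=%llu\n",
               ARM_SMCCC_ARCH_WORKAROUND_2_FID, (unsigned long long)enable);
    }

    int main(void)
    {
        smccc_workaround_2(1); /* guest -> Xen: mitigation back on */
        smccc_workaround_2(0); /* Xen -> guest: restore the guest's
                                  choice (0 if it disabled SSBD) */
        return 0;
    }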
>
> For now, calling leave_hypervisor_to_guest() is not necessary when
> injecting a vSError to the guest. But it would still be good to give an
> opportunity to reschedule. So both enter_hypervisor_from_guest() and
> leave_hypervisor_to_guest() are called.
>
> Note that on arm64, the return value for check_pending_vserror is now
> stored in x19 instead of x0. This is because we want to keep the value
> across calls to C functions (x0, unlike x19, will not be saved by the
> callee).
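The calling-convention point, illustrated in C (per AAPCS64, a called function may clobber x0-x18 but must preserve x19-x28; the compiler spills automatically in C, while entry.S has to park the value by hand):

    #include <stdint.h>
    #include <stdio.h>

    /* Stand-ins for the C helpers called between the check and its use;
     * any such call is free to clobber the caller-saved registers. */
    static void enter_hypervisor_from_guest_preirq(void) { }
    static void enter_hypervisor_from_guest(void)        { }

    static uint64_t check_pending_guest_serror(void) { return 1; }

    int main(void)
    {
        /* Needed after the calls below, so it must live somewhere the
         * callees are obliged to preserve -- hence x19 in entry.S. */
        uint64_t consumed = check_pending_guest_serror();

        enter_hypervisor_from_guest_preirq();
        enter_hypervisor_from_guest();

        if ( consumed )
            puts("skip do_trap_*(), fall through to the exit path");
        return 0;
    }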
>
> Take the opportunity to rename check_pending_vserror() to
> check_pending_guest_serror() as the function is dealing with host SError
> and *not* virtual SError. The documentation is also updated across
> Arm32 and Arm64 to clarify how Xen is dealing with SErrors generated by
> the guest.
>
> Signed-off-by: Julien Grall <julien.grall@xxxxxxx>
Reviewed-by: Stefano Stabellini <sstabellini@xxxxxxxxxx>
> ---
>
> Changes in v4:
> - Rewording + typo
>
> Changes in v3:
> - Update comments in the code.
> - Update commit message
> - Add arm32 support
>
> There are two known issues without this patch applied:
> * The state of the vGIC when using the new version may be lost.
> * SSBD workaround may be kept disabled while rescheduling the guest.
> ---
> xen/arch/arm/arm32/entry.S | 57 ++++++++++++++++++++++++++++++++++++++--------
> xen/arch/arm/arm64/entry.S | 54 ++++++++++++++++++++++++++++++++-----------
> 2 files changed, 88 insertions(+), 23 deletions(-)
>
> diff --git a/xen/arch/arm/arm32/entry.S b/xen/arch/arm/arm32/entry.S
> index 34156c4404..b31056a616 100644
> --- a/xen/arch/arm/arm32/entry.S
> +++ b/xen/arch/arm/arm32/entry.S
> @@ -27,6 +27,10 @@
> /*
> * Actions that needs to be done after entering the hypervisor from the
> * guest and before the interrupts are unmasked.
> + *
> + * @return:
> + * r4: Set to a non-zero value if a pending Abort exception took place.
> + * Otherwise, it will be set to zero.
> */
> arch_enter_hypervisor_from_guest_preirq:
> #ifdef CONFIG_ARM32_HARDEN_BRANCH_PREDICTOR
> @@ -56,18 +60,35 @@ arch_enter_hypervisor_from_guest_preirq:
> SAVE_ONE_BANKED(R11_fiq); SAVE_ONE_BANKED(R12_fiq);
>
> /*
> - * If the SKIP_SYNCHRONIZE_SERROR_ENTRY_EXIT has been set in the cpu
> - * feature, the checking of pending SErrors will be skipped.
> + * We may have entered the hypervisor with pending asynchronous Abort
> + * generated by the guest. If we need to categorize them, then
> + * we need to consume any outstanding asynchronous Abort.
> + * Otherwise, they can be consumed later on.
> */
> alternative_if SKIP_SYNCHRONIZE_SERROR_ENTRY_EXIT
> + mov r4, #0 /* r4 := No Abort was consumed */
> b skip_check
> alternative_else_nop_endif
>
> /*
> - * Start to check pending virtual abort in the gap of Guest -> HYP
> - * world switch.
> + * Consume pending asynchronous Abort generated by the guest if any.
> + *
> + * The only way to consume a pending Abort is to unmask it. So the
> + * Abort exception will be unmasked for a small window and then
> + * masked again.
> + *
> + * It is fine to unmask asynchronous Abort exception as we fully
> + * control the state of the processor and only limited code will
> + * be executed if the exception returns (see do_trap_data_abort()).
> *
> - * Save ELR_hyp to check whether the pending virtual abort exception
> + * TODO: The asynchronous abort path should be reworked to
> + * inject the virtual asynchronous Abort in enter_hypervisor_*
> + * rather than do_trap_data_abort(). This should make it easier
> + * to understand the path.
> + */
> +
> + /*
> + * Save ELR_hyp to check whether the pending virtual abort exception
> * takes place while we are doing this trap exception.
> */
> mrs r1, ELR_hyp
> @@ -112,11 +133,11 @@ abort_guest_exit_end:
> cmp r1, r2
>
> /*
> - * Not equal, the pending virtual abort exception took place, the
> - * initial exception does not have any significance to be handled.
> - * Exit ASAP.
> + * Set r4 depending on whether an asynchronous Abort was
> + * consumed.
> */
> - bne return_from_trap
> + movne r4, #1
> + moveq r4, #0
>
> skip_check:
> b enter_hypervisor_from_guest_preirq
> @@ -179,12 +200,28 @@ ENDPROC(arch_enter_hypervisor_from_guest_preirq)
>
> 1:
> /* Trap from the guest */
> + /*
> + * arch_enter_hypervisor_from_guest_preirq will return with r4 set to
> + * a non-zero value if an asynchronous Abort was consumed.
> + *
> + * When an asynchronous Abort has been consumed (r4 != 0), we may have
> + * injected a virtual asynchronous Abort to the guest.
> + *
> + * In this case, the initial exception will be discarded (PC has
> + * been adjusted by inject_vabt_exception()). However, we still
> + * want to give an opportunity to reschedule the vCPU. So we
> + * only want to skip the handling of the initial exception (i.e.
> + * do_trap_*()).
> + */
> bl arch_enter_hypervisor_from_guest_preirq
> .if \guest_iflags != n
> cpsie \guest_iflags
> .endif
>
> - bl enter_hypervisor_from_guest
> + adr lr, 2f
> + cmp r4, #0
> + adrne lr, return_from_trap
> + b enter_hypervisor_from_guest
>
> 2:
> /* We are ready to handle the trap, setup the registers and jump. */
> diff --git a/xen/arch/arm/arm64/entry.S b/xen/arch/arm/arm64/entry.S
> index a8ba7ab961..d35855af96 100644
> --- a/xen/arch/arm/arm64/entry.S
> +++ b/xen/arch/arm/arm64/entry.S
> @@ -184,18 +184,41 @@
> .macro guest_vector compat, iflags, trap, save_x0_x1=1
> entry hyp=0, compat=\compat, save_x0_x1=\save_x0_x1
> /*
> - * The vSError will be checked while SKIP_SYNCHRONIZE_SERROR_ENTRY_EXIT
> - * is not set. If a vSError took place, the initial exception will be
> - * skipped. Exit ASAP
> + * We may have entered the hypervisor with pending SErrors
> + * generated by the guest. If we need to categorize them, then
> + * we need to consume any outstanding SErrors.
> + *
> + * The function check_pending_guest_serror() will unmask the SError
> + * exception temporarily. This is fine to do before the enter_*
> + * helpers are called because we fully control the state of the
> + * processor and only limited code will be executed (see
> + * do_trap_hyp_serror()).
> + *
> + * When a SError has been consumed (x19 != 0), we may have injected a
> + * virtual SError to the guest.
> + *
> + * In this case, the initial exception will be discarded (PC has
> + * been adjusted by inject_vabt_exception()). However, we still
> + * want to give an opportunity to reschedule the vCPU. So we
> + * only want to skip the handling of the initial exception (i.e.
> + * do_trap_*()).
> + *
> + * TODO: The SErrors path should be reworked to inject the vSError in
> + * enter_hypervisor_* rather than do_trap_hyp_serror(). This should
> + * make it easier to understand the path.
> */
> alternative_if_not SKIP_SYNCHRONIZE_SERROR_ENTRY_EXIT
> - bl check_pending_vserror
> - cbnz x0, 1f
> + bl check_pending_guest_serror
> alternative_else_nop_endif
>
> bl enter_hypervisor_from_guest_preirq
> msr daifclr, \iflags
> bl enter_hypervisor_from_guest
> +
> + alternative_if_not SKIP_SYNCHRONIZE_SERROR_ENTRY_EXIT
> + cbnz x19, 1f
> + alternative_else_nop_endif
> +
> mov x0, sp
> bl do_trap_\trap
> 1:
> @@ -436,13 +459,17 @@ return_from_trap:
> eret
>
> /*
> - * This function is used to check pending virtual SError in the gap of
> - * EL1 -> EL2 world switch.
> - * The x0 register will be used to indicate the results of detection.
> - * x0 -- Non-zero indicates a pending virtual SError took place.
> - * x0 -- Zero indicates no pending virtual SError took place.
> + * Consume pending SError generated by the guest if any.
> + *
> + * @return:
> + * x19: Set to a non-zero value if a pending SError was consumed.
> + * Otherwise, it will be set to zero.
> + *
> + * Without the RAS extension, the only way to consume an SError is to
> + * unmask it. So the function will unmask the SError exception for a
> + * small window and then mask it again.
> */
> -check_pending_vserror:
> +check_pending_guest_serror:
> /*
> * Save elr_el2 to check whether the pending SError exception takes
> * place while we are doing this sync exception.
> @@ -487,11 +514,12 @@ abort_guest_exit_end:
>
> /*
> * Not equal, the pending SError exception took place, set
> - * x0 to non-zero.
> + * x19 to non-zero.
> */
> - cset x0, ne
> + cset x19, ne
>
> ret
> +ENDPROC(check_pending_guest_serror)
>
> /*
> * Exception vectors.
> --
> 2.11.0
>
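A closing note on the "consume by unmasking" trick both entry.S files rely on, as an aarch64-only C sketch (inline asm; the DAIF bit and barriers are architectural, but the real sequence in check_pending_guest_serror() additionally snapshots and compares ELR_EL2 around the window):

    /* Open a short window in which a pending SError, masked until now
     * via PSTATE.A, can be taken and thereby consumed. */
    static inline void consume_pending_serror(void)
    {
        asm volatile(
            "dsb sy\n\t"          /* complete outstanding memory accesses */
            "msr daifclr, #4\n\t" /* clear PSTATE.A: window opens */
            "isb\n\t"             /* a pending SError is taken here */
            "msr daifset, #4\n\t" /* set PSTATE.A: window closes */
            ::: "memory");
    }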
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel