Re: [PATCH v3 1/8] domain: GADDR based shared guest area registration alternative - teardown
On Wed, May 03, 2023 at 05:54:47PM +0200, Jan Beulich wrote:
> In preparation of the introduction of new vCPU operations allowing to
> register the respective areas (one of the two is x86-specific) by
> guest-physical address, add the necessary domain cleanup hooks.
>
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> Reviewed-by: Julien Grall <jgrall@xxxxxxxxxx>
> ---
> RFC: Zapping the areas in pv_shim_shutdown() may not be strictly
> necessary: Aiui unmap_vcpu_info() is called only because the vCPU
> info area cannot be re-registered. Beyond that I guess the
> assumption is that the areas would only be re-registered as they
> were before. If that's not the case I wonder whether the guest
> handles for both areas shouldn't also be zapped.
IIRC the reason was to unset v->vcpu_info_mfn so that it can be set
again on resume from suspension; otherwise the hypercall would return
an error.
I guess we should also zap the runstate handlers, or else we might
corrupt guest state.
> ---
> v2: Add assertion in unmap_guest_area().
>
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -1019,7 +1019,10 @@ int arch_domain_soft_reset(struct domain
> }
>
> for_each_vcpu ( d, v )
> + {
> set_xen_guest_handle(v->arch.time_info_guest, NULL);
> + unmap_guest_area(v, &v->arch.time_guest_area);
> + }
>
> exit_put_gfn:
> put_gfn(d, gfn_x(gfn));
> @@ -2356,6 +2359,8 @@ int domain_relinquish_resources(struct d
> if ( ret )
> return ret;
>
> + unmap_guest_area(v, &v->arch.time_guest_area);
> +
> vpmu_destroy(v);
> }
>
> --- a/xen/arch/x86/include/asm/domain.h
> +++ b/xen/arch/x86/include/asm/domain.h
> @@ -669,6 +669,7 @@ struct arch_vcpu
>
> /* A secondary copy of the vcpu time info. */
> XEN_GUEST_HANDLE(vcpu_time_info_t) time_info_guest;
> + struct guest_area time_guest_area;
>
> struct arch_vm_event *vm_event;
>
> --- a/xen/arch/x86/pv/shim.c
> +++ b/xen/arch/x86/pv/shim.c
> @@ -382,8 +382,10 @@ int pv_shim_shutdown(uint8_t reason)
>
> for_each_vcpu ( d, v )
> {
> - /* Unmap guest vcpu_info pages. */
> + /* Unmap guest vcpu_info page and runstate/time areas. */
> unmap_vcpu_info(v);
> + unmap_guest_area(v, &v->runstate_guest_area);
> + unmap_guest_area(v, &v->arch.time_guest_area);
>
> /* Reset the periodic timer to the default value. */
> vcpu_set_periodic_timer(v, MILLISECS(10));
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -963,7 +963,10 @@ int domain_kill(struct domain *d)
> if ( cpupool_move_domain(d, cpupool0) )
> return -ERESTART;
> for_each_vcpu ( d, v )
> + {
> unmap_vcpu_info(v);
> + unmap_guest_area(v, &v->runstate_guest_area);
> + }
> d->is_dying = DOMDYING_dead;
> /* Mem event cleanup has to go here because the rings
> * have to be put before we call put_domain. */
> @@ -1417,6 +1420,7 @@ int domain_soft_reset(struct domain *d,
> {
> set_xen_guest_handle(runstate_guest(v), NULL);
> unmap_vcpu_info(v);
> + unmap_guest_area(v, &v->runstate_guest_area);
> }
>
> rc = arch_domain_soft_reset(d);
> @@ -1568,6 +1572,19 @@ void unmap_vcpu_info(struct vcpu *v)
> put_page_and_type(mfn_to_page(mfn));
> }
>
> +/*
> + * This is only intended to be used for domain cleanup (or more generally only
> + * with at least the respective vCPU, if it's not the current one, reliably
> + * paused).
> + */
> +void unmap_guest_area(struct vcpu *v, struct guest_area *area)
The vcpu parameter could be const given the current code, but I assume
that will change in future patches. The same could be said about the
guest_area parameter, though I'm confident that one will change.
> +{
> + struct domain *d = v->domain;
> +
> + if ( v != current )
> + ASSERT(atomic_read(&v->pause_count) | atomic_read(&d->pause_count));
Isn't this racy? What guarantees that the vcpu won't be kicked just
after the check has been performed?
Thanks, Roger.