Re: [Xen-devel] [PATCH v10 08/11] xen: arch-specific hooks for domain_soft_reset()
On Tue, Jul 28, 2015 at 03:28:13PM +0200, Vitaly Kuznetsov wrote:
> The x86-specific hook cleans up the pirq-emuirq mappings, destroys all ioreq
> servers and replaces the shared_info frame with an empty page to support
> a subsequent XENMAPSPACE_shared_info call.
>
> The ARM-specific hook is -ENOSYS for now.
>
> Signed-off-by: Vitaly Kuznetsov <vkuznets@xxxxxxxxxx>

Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>

> ---
> Changes since v9:
> - is_hvm_domain() -> has_hvm_container_domain() to support PVH [Jan Beulich]
> - introduce hvm_domain_soft_reset() to avoid making
>   hvm_destroy_all_ioreq_servers() public [Jan Beulich]
> - ASSERT( owner == d ) as we unconditionally do share_xen_page_with_guest()
>   on domain create path [Jan Beulich]
> - crash the domain when arch-specific hook fails [Jan Beulich]
> - eliminate mfn_new variable [Jan Beulich]
> - proper check for 'shared_info was never mapped' case.
> ---
>  xen/arch/arm/domain.c         |  5 +++
>  xen/arch/x86/domain.c         | 81 +++++++++++++++++++++++++++++++++++++++++++
>  xen/arch/x86/hvm/hvm.c        |  5 +++
>  xen/common/domain.c           |  7 ++++
>  xen/include/asm-x86/hvm/hvm.h |  1 +
>  xen/include/xen/domain.h      |  2 ++
>  6 files changed, 101 insertions(+)
>
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index b2bfc7d..5bdc2e9 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -655,6 +655,11 @@ void arch_domain_unpause(struct domain *d)
>  {
>  }
>
> +int arch_domain_soft_reset(struct domain *d)
> +{
> +    return -ENOSYS;
> +}
> +
>  static int is_guest_pv32_psr(uint32_t psr)
>  {
>      switch (psr & PSR_MODE_MASK)
> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> index 045f6ff..afc3e1e 100644
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -710,6 +710,87 @@ void arch_domain_unpause(struct domain *d)
>          viridian_time_ref_count_thaw(d);
>  }
>
> +int arch_domain_soft_reset(struct domain *d)
> +{
> +    struct page_info *page = virt_to_page(d->shared_info), *new_page;
> +    int ret = 0;
> +    struct domain *owner;
> +    unsigned long mfn, gfn;
> +    p2m_type_t p2mt;
> +    unsigned int i;
> +
> +    /* Soft reset is supported for HVM/PVH domains only. */
> +    if ( !has_hvm_container_domain(d) )
> +        return -EINVAL;
> +
> +    hvm_domain_soft_reset(d);
> +
> +    spin_lock(&d->event_lock);
> +    for ( i = 0; i < d->nr_pirqs ; i++ )
> +    {
> +        if ( domain_pirq_to_emuirq(d, i) != IRQ_UNBOUND )
> +        {
> +            ret = unmap_domain_pirq_emuirq(d, i);
> +            if ( ret )
> +                break;
> +        }
> +    }
> +    spin_unlock(&d->event_lock);
> +
> +    if ( ret )
> +        return ret;
> +
> +    /*
> +     * The shared_info page needs to be replaced with a new page, otherwise we
> +     * will get a hole if the domain does XENMAPSPACE_shared_info.
> +     */
> +
> +    owner = page_get_owner_and_reference(page);
> +    ASSERT( owner == d );
> +
> +    mfn = page_to_mfn(page);
> +    gfn = mfn_to_gmfn(d, mfn);
> +
> +    /*
> +     * gfn == INVALID_GFN indicates that the shared_info page was never mapped
> +     * to the domain's address space and there is nothing to replace.
> +     */
> +    if ( gfn == INVALID_GFN )
> +        goto exit_put_page;
> +
> +    if ( mfn_x(get_gfn_query(d, gfn, &p2mt)) != mfn )
> +    {
> +        printk(XENLOG_G_ERR "Failed to get Dom%d's shared_info GFN (%lx)\n",
> +               d->domain_id, gfn);
> +        ret = -EINVAL;
> +        goto exit_put_page;
> +    }
> +
> +    new_page = alloc_domheap_page(d, 0);
> +    if ( !new_page )
> +    {
> +        printk(XENLOG_G_ERR "Failed to alloc a page to replace"
> +               " Dom%d's shared_info frame %lx\n", d->domain_id, gfn);
> +        ret = -ENOMEM;
> +        goto exit_put_gfn;
> +    }
> +    guest_physmap_remove_page(d, gfn, mfn, PAGE_ORDER_4K);
> +
> +    ret = guest_physmap_add_page(d, gfn, page_to_mfn(new_page),
> +                                 PAGE_ORDER_4K);
> +    if ( ret )
> +    {
> +        printk(XENLOG_G_ERR "Failed to add a page to replace"
> +               " Dom%d's shared_info frame %lx\n", d->domain_id, gfn);
> +        free_domheap_page(new_page);
> +    }
> + exit_put_gfn:
> +    put_gfn(d, gfn);
> + exit_put_page:
> +    put_page(page);
> +
> +    return ret;
> +}
> +
>  /*
>   * These are the masks of CR4 bits (subject to hardware availability) which a
>   * PV guest may not legitimately attempt to modify.
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index ec1d797..2bd7f0f 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -6800,6 +6800,11 @@ bool_t altp2m_vcpu_emulate_ve(struct vcpu *v)
>      return 0;
>  }
>
> +void hvm_domain_soft_reset(struct domain *d)
> +{
> +    hvm_destroy_all_ioreq_servers(d);
> +}
> +
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index 4c8e6a2..ffc8740 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -1066,6 +1066,13 @@ int domain_soft_reset(struct domain *d)
>      for_each_vcpu ( d, v )
>          unmap_vcpu_info(v);
>
> +    rc = arch_domain_soft_reset(d);
> +    if ( rc )
> +    {
> +        domain_crash(d);
> +        return rc;
> +    }
> +
>      domain_resume(d);
>
>      return 0;
> diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
> index 425327a..caeccf9 100644
> --- a/xen/include/asm-x86/hvm/hvm.h
> +++ b/xen/include/asm-x86/hvm/hvm.h
> @@ -226,6 +226,7 @@ extern const struct hvm_function_table *start_vmx(void);
>  int hvm_domain_initialise(struct domain *d);
>  void hvm_domain_relinquish_resources(struct domain *d);
>  void hvm_domain_destroy(struct domain *d);
> +void hvm_domain_soft_reset(struct domain *d);
>
>  int hvm_vcpu_initialise(struct vcpu *v);
>  void hvm_vcpu_destroy(struct vcpu *v);
> diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
> index 848db8a..a469fe0 100644
> --- a/xen/include/xen/domain.h
> +++ b/xen/include/xen/domain.h
> @@ -65,6 +65,8 @@ void arch_domain_shutdown(struct domain *d);
>  void arch_domain_pause(struct domain *d);
>  void arch_domain_unpause(struct domain *d);
>
> +int arch_domain_soft_reset(struct domain *d);
> +
>  int arch_set_info_guest(struct vcpu *, vcpu_guest_context_u);
>  void arch_get_info_guest(struct vcpu *, vcpu_guest_context_u);
>
> --
> 2.4.3

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
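The reason the hook swaps the old shared_info frame for a fresh domheap page is that the kernel which boots after the soft reset is expected to map shared_info itself via XENMEM_add_to_physmap/XENMAPSPACE_shared_info; replacing the frame keeps the old GFN populated so no hole is left behind. A minimal guest-side sketch of that subsequent call follows. It assumes a Linux-style guest with the usual Xen interface headers and HYPERVISOR_memory_op() wrapper, and SHARED_INFO_GPFN is an arbitrarily chosen frame for illustration; it is not code from this series.

#include <xen/interface/xen.h>      /* DOMID_SELF */
#include <xen/interface/memory.h>   /* XENMEM_add_to_physmap, XENMAPSPACE_shared_info */
#include <asm/xen/hypercall.h>      /* HYPERVISOR_memory_op() */

#define SHARED_INFO_GPFN  0xfffffUL /* example guest frame, chosen for illustration */

static int map_shared_info(void)
{
    struct xen_add_to_physmap xatp = {
        .domid = DOMID_SELF,
        .space = XENMAPSPACE_shared_info,
        .idx   = 0,                 /* there is a single shared_info frame */
        .gpfn  = SHARED_INFO_GPFN,
    };

    /*
     * Because arch_domain_soft_reset() put a plain page where the old
     * shared_info used to live, this call maps the real shared_info at the
     * requested GPFN instead of leaving a hole at the previous location.
     */
    return HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp);
}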