[Xen-changelog] [xen master] x86/ioreq_server: make p2m_finish_type_change actually work
commit eb13199100dffba1484aac3e72dc7aac2b42629a
Author:     Xiong Zhang <xiong.y.zhang@xxxxxxxxx>
AuthorDate: Wed May 17 17:24:45 2017 +0200
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Wed May 17 17:24:45 2017 +0200

    x86/ioreq_server: make p2m_finish_type_change actually work

    Commit 6d774a951696 ("x86/ioreq server: synchronously reset outstanding
    p2m_ioreq_server entries when an ioreq server unmaps") introduced
    p2m_finish_type_change(), which was meant to synchronously finish a
    previously initiated type change over a gpfn range.  It did this by
    calling get_entry(), checking if it was the appropriate type, and then
    calling set_entry().

    Unfortunately, a previous commit (1679e0df3df6 "x86/ioreq server:
    asynchronously reset outstanding p2m_ioreq_server entries") modified
    get_entry() to always return the new type after the type change,
    meaning that p2m_finish_type_change() never changed any entries.
    Which means when an ioreq server was detached and then re-attached
    (as happens in XenGT on reboot) the re-attach failed.

    Fix this by using the existing p2m-specific recalculation logic
    instead of doing a read-check-write loop.

    Fix: 'commit 6d774a951696 ("x86/ioreq server: synchronously reset outstanding p2m_ioreq_server entries when an ioreq server unmaps")'
    Signed-off-by: Xiong Zhang <xiong.y.zhang@xxxxxxxxx>
    Signed-off-by: Yu Zhang <yu.c.zhang@xxxxxxxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    Acked-by: George Dunlap <george.dunlap@xxxxxxxxxx>
    Release-acked-by: Julien Grall <julien.grall@xxxxxxx>
---
 xen/arch/x86/hvm/dm.c     |  5 +++--
 xen/arch/x86/mm/p2m-ept.c |  1 +
 xen/arch/x86/mm/p2m-pt.c  |  1 +
 xen/arch/x86/mm/p2m.c     | 35 +++++++++++++++++++++++------------
 xen/include/asm-x86/p2m.h |  9 +++++----
 5 files changed, 33 insertions(+), 18 deletions(-)

diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index b296d2d..4cf6dee 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -490,8 +490,9 @@ static int dm_op(const struct dmop_args *op_args)
                 first_gfn <= p2m->max_mapped_pfn )
         {
             /* Iterate p2m table for 256 gfns each time. */
-            p2m_finish_type_change(d, _gfn(first_gfn), 256,
-                                   p2m_ioreq_server, p2m_ram_rw);
+            rc = p2m_finish_type_change(d, _gfn(first_gfn), 256);
+            if ( rc < 0 )
+                break;
 
             first_gfn += 256;
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index f98121d..ecab56f 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -1239,6 +1239,7 @@ int ept_p2m_init(struct p2m_domain *p2m)
     p2m->set_entry = ept_set_entry;
     p2m->get_entry = ept_get_entry;
+    p2m->recalc = resolve_misconfig;
     p2m->change_entry_type_global = ept_change_entry_type_global;
     p2m->change_entry_type_range = ept_change_entry_type_range;
     p2m->memory_type_changed = ept_memory_type_changed;
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 5079b59..2eddeee 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -1153,6 +1153,7 @@ void p2m_pt_init(struct p2m_domain *p2m)
 {
     p2m->set_entry = p2m_pt_set_entry;
     p2m->get_entry = p2m_pt_get_entry;
+    p2m->recalc = do_recalc;
     p2m->change_entry_type_global = p2m_pt_change_entry_type_global;
     p2m->change_entry_type_range = p2m_pt_change_entry_type_range;
     p2m->write_p2m_entry = paging_write_p2m_entry;
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index c6ec1a4..9eb6dc8 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1030,33 +1030,44 @@ void p2m_change_type_range(struct domain *d,
     p2m_unlock(p2m);
 }
 
-/* Synchronously modify the p2m type for a range of gfns from ot to nt. */
-void p2m_finish_type_change(struct domain *d,
-                            gfn_t first_gfn, unsigned long max_nr,
-                            p2m_type_t ot, p2m_type_t nt)
+/*
+ * Finish p2m type change for gfns which are marked as need_recalc in a range.
+ * Returns: 0/1 for success, negative for failure
+ */
+int p2m_finish_type_change(struct domain *d,
+                           gfn_t first_gfn, unsigned long max_nr)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    p2m_type_t t;
     unsigned long gfn = gfn_x(first_gfn);
     unsigned long last_gfn = gfn + max_nr - 1;
-
-    ASSERT(ot != nt);
-    ASSERT(p2m_is_changeable(ot) && p2m_is_changeable(nt));
+    int rc = 0;
 
     p2m_lock(p2m);
 
     last_gfn = min(last_gfn, p2m->max_mapped_pfn);
     while ( gfn <= last_gfn )
     {
-        get_gfn_query_unlocked(d, gfn, &t);
-
-        if ( t == ot )
-            p2m_change_type_one(d, gfn, t, nt);
+        rc = p2m->recalc(p2m, gfn);
+        /*
+         * ept->recalc could return 0/1/-ENOMEM. pt->recalc could return
+         * 0/-ENOMEM/-ENOENT, -ENOENT isn't an error as we are looping
+         * gfn here.
+         */
+        if ( rc == -ENOENT )
+            rc = 0;
+        else if ( rc < 0 )
+        {
+            gdprintk(XENLOG_ERR, "p2m->recalc failed! Dom%d gfn=%lx\n",
+                     d->domain_id, gfn);
+            break;
+        }
 
         gfn++;
     }
 
     p2m_unlock(p2m);
+
+    return rc;
 }
 
 /*
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 7574a9b..408f7da 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -246,6 +246,8 @@ struct p2m_domain {
                                        p2m_query_t q,
                                        unsigned int *page_order,
                                        bool_t *sve);
+    int                (*recalc)(struct p2m_domain *p2m,
+                                 unsigned long gfn);
     void               (*enable_hardware_log_dirty)(struct p2m_domain *p2m);
     void               (*disable_hardware_log_dirty)(struct p2m_domain *p2m);
     void               (*flush_hardware_cached_dirty)(struct p2m_domain *p2m);
@@ -607,10 +609,9 @@ int p2m_change_type_one(struct domain *d, unsigned long gfn,
                         p2m_type_t ot, p2m_type_t nt);
 
 /* Synchronously change the p2m type for a range of gfns */
-void p2m_finish_type_change(struct domain *d,
-                            gfn_t first_gfn,
-                            unsigned long max_nr,
-                            p2m_type_t ot, p2m_type_t nt);
+int p2m_finish_type_change(struct domain *d,
+                           gfn_t first_gfn,
+                           unsigned long max_nr);
 
 /* Report a change affecting memory types. */
 void p2m_memory_type_changed(struct domain *d);
--
generated by git-patchbot for /home/xen/git/xen.git#master

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxx
https://lists.xenproject.org/xen-changelog
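As an aside, the failure mode the commit message describes can be reproduced in miniature outside of Xen. The following standalone C sketch is a model only, not Xen code: the names effective_type(), recalc(), and the table layout are illustrative assumptions. It shows why a read-check-write loop keyed on the old type never fires once the lookup path already reports the post-change type, while an explicit per-entry recalculation does rewrite the entries.

    #include <stdio.h>
    #include <stdbool.h>

    #define NR_GFNS 8

    enum p2m_type { RAM_RW, IOREQ_SERVER };

    struct entry {
        enum p2m_type stored;   /* what is actually written in the table       */
        bool need_recalc;       /* lazy change recorded, entry not yet rewritten */
    };

    static struct entry table[NR_GFNS];

    /*
     * Models the post-1679e0df3df6 get_entry() behaviour: the lookup already
     * reports the type as if the pending (lazy) change had been applied.
     */
    static enum p2m_type effective_type(unsigned long gfn)
    {
        if ( table[gfn].need_recalc && table[gfn].stored == IOREQ_SERVER )
            return RAM_RW;
        return table[gfn].stored;
    }

    /* Models a per-entry recalc hook: actually rewrites the stored entry. */
    static int recalc(unsigned long gfn)
    {
        if ( !table[gfn].need_recalc )
            return 0;
        table[gfn].stored = effective_type(gfn);
        table[gfn].need_recalc = false;
        return 1;
    }

    static unsigned int count_stale(void)
    {
        unsigned int n = 0;
        unsigned long gfn;

        for ( gfn = 0; gfn < NR_GFNS; gfn++ )
            n += table[gfn].need_recalc;
        return n;
    }

    int main(void)
    {
        unsigned long gfn;

        /* An ioreq server unmapped: its entries are lazily marked stale. */
        for ( gfn = 0; gfn < NR_GFNS; gfn++ )
            table[gfn] = (struct entry){ IOREQ_SERVER, true };

        /*
         * Broken approach: read the (effective) type and compare it with the
         * old type.  The comparison never matches, so nothing is rewritten.
         */
        for ( gfn = 0; gfn < NR_GFNS; gfn++ )
            if ( effective_type(gfn) == IOREQ_SERVER )
                table[gfn] = (struct entry){ RAM_RW, false };
        printf("after read-check-write loop: %u stale entries\n", count_stale());

        /* Fixed approach: force the per-entry recalculation. */
        for ( gfn = 0; gfn < NR_GFNS; gfn++ )
            recalc(gfn);
        printf("after recalc loop:           %u stale entries\n", count_stale());

        return 0;
    }

Built with e.g. "gcc -std=c99" and run, the first loop leaves every entry stale while the second clears them all, which mirrors why the patch makes p2m_finish_type_change() call the p2m-specific p2m->recalc() hook instead of comparing against the old type.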