[PATCH] x86/PoD: correct ordering of checks in p2m_pod_zero_check()
Commit 0537d246f8db ("mm: add 'is_special_page' inline function...") moved
the is_special_page() checks first in its respective changes to PoD code.
While this is fine for p2m_pod_zero_check_superpage(), the validity of the
MFN is inferred in both cases from the p2m_is_ram() check, which therefore
also needs to come first in this 2nd instance. Take the opportunity and
address latent UB here as well - transform the MFN into struct page_info *
only after having established that this is a valid page.

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
---
I will admit that this was build tested only. I did observe the crash late
yesterday while in the office, but got around to analyzing it only today,
where I'm again restricted in what I can reasonably test.

--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -877,23 +877,25 @@ p2m_pod_zero_check(struct p2m_domain *p2
     for ( i = 0; i < count; i++ )
     {
         p2m_access_t a;
-        struct page_info *pg;

         mfns[i] = p2m->get_entry(p2m, gfns[i], types + i, &a,
                                  0, NULL, NULL);
-        pg = mfn_to_page(mfns[i]);

         /*
          * If this is ram, and not a pagetable or a special page, and
          * probably not mapped elsewhere, map it; otherwise, skip.
          */
-        if ( !is_special_page(pg) && p2m_is_ram(types[i]) &&
-             (pg->count_info & PGC_allocated) &&
-             !(pg->count_info & PGC_page_table) &&
-             ((pg->count_info & PGC_count_mask) <= max_ref) )
-            map[i] = map_domain_page(mfns[i]);
-        else
-            map[i] = NULL;
+        map[i] = NULL;
+        if ( p2m_is_ram(types[i]) )
+        {
+            const struct page_info *pg = mfn_to_page(mfns[i]);
+
+            if ( !is_special_page(pg) &&
+                 (pg->count_info & PGC_allocated) &&
+                 !(pg->count_info & PGC_page_table) &&
+                 ((pg->count_info & PGC_count_mask) <= max_ref) )
+                map[i] = map_domain_page(mfns[i]);
+        }
     }

     /*
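For illustration only, not part of the patch: a minimal, self-contained C
sketch of why the check ordering matters. All names in it (frame_table,
mock_mfn_to_page(), check_entry_fixed(), MOCK_TYPE_POD, the 16-entry table)
are invented stand-ins for the real hypervisor interfaces, so this is a
model of the failure mode under those assumptions, not Xen code.

#include <stdio.h>

#define INVALID_MFN  (~0UL)      /* what a p2m lookup miss hands back */
#define FRAME_COUNT  16UL        /* size of the mock frame table */

struct mock_page_info { unsigned long count_info; };

static struct mock_page_info frame_table[FRAME_COUNT];

/* Only RAM entries carry a meaningful MFN in this model. */
enum mock_p2m_type { MOCK_TYPE_RAM, MOCK_TYPE_POD };

/* Like the real mfn_to_page(), pure pointer arithmetic: no validation. */
static struct mock_page_info *mock_mfn_to_page(unsigned long mfn)
{
    return &frame_table[mfn];
}

/*
 * Post-patch ordering: establish that the entry is RAM (and hence that
 * the MFN is meaningful) before forming or dereferencing the page pointer.
 */
static int check_entry_fixed(unsigned long mfn, enum mock_p2m_type t)
{
    if ( t != MOCK_TYPE_RAM )
        return 0;

    return mock_mfn_to_page(mfn)->count_info != 0;
}

int main(void)
{
    /* A PoD/unpopulated entry: no backing frame, MFN is the invalid marker. */
    unsigned long mfn = INVALID_MFN;

    /*
     * The pre-patch ordering would effectively evaluate
     *     mock_mfn_to_page(mfn)->count_info
     * first, indexing ~0UL into a 16-entry array: undefined behaviour in C,
     * and in the hypervisor a wild read that can crash.
     */
    printf("mappable: %d\n", check_entry_fixed(mfn, MOCK_TYPE_POD));

    return 0;
}

Running the sketch prints "mappable: 0" because the type check short-circuits
before the bogus pointer is ever formed; swapping the two checks back would
reintroduce the out-of-bounds access.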