[Xen-devel] Re: [PATCH 3 of 4] Nested p2m: clarify logic in p2m_get_nestedp2m()
On 06/22/11 18:10, Tim Deegan wrote:
> # HG changeset patch
> # User Tim Deegan <Tim.Deegan@xxxxxxxxxx>
> # Date 1308758648 -3600
> # Node ID b265371addbbc8a58c95a269fe3cd0fdc866aaa3
> # Parent  dcb8ae5e3eaf6516c889087dfb15efa41a1ac3e9
> Nested p2m: clarify logic in p2m_get_nestedp2m()
>
> This just makes the behaviour of this function a bit more explicit.
> It may be that it also needs to be changed. :)
>
> Signed-off-by: Tim Deegan <Tim.Deegan@xxxxxxxxxx>
>
> diff -r dcb8ae5e3eaf -r b265371addbb xen/arch/x86/mm/p2m.c
> --- a/xen/arch/x86/mm/p2m.c    Wed Jun 22 17:04:08 2011 +0100
> +++ b/xen/arch/x86/mm/p2m.c    Wed Jun 22 17:04:08 2011 +0100
> @@ -1131,11 +1131,9 @@ p2m_get_nestedp2m(struct vcpu *v, uint64
>      d = v->domain;
>
>      nestedp2m_lock(d);
> -    for (i = 0; i < MAX_NESTEDP2M; i++) {
> -        p2m = d->arch.nested_p2m[i];
> -        if ((p2m->cr3 != cr3 && p2m->cr3 != CR3_EADDR) || (p2m != nv->nv_p2m))
> -            continue;
> -
> +    p2m = nv->nv_p2m;
> +    if ( p2m && (p2m->cr3 == cr3 || p2m->cr3 == CR3_EADDR) )
> +    {
>          nv->nv_flushp2m = 0;
>          p2m_getlru_nestedp2m(d, p2m);
>          nv->nv_p2m = p2m;

Ok, thanks. In p2m_get_nestedp2m(), replace this code hunk

    for (i = 0; i < MAX_NESTEDP2M; i++) {
        p2m = p2m_getlru_nestedp2m(d, NULL);
        p2m_flush_locked(p2m);
    }

with

    p2m = p2m_getlru_nestedp2m(d, NULL);
    p2m_flush_locked(p2m);

The 'i' variable is then unused. This fixes an endless loop of nested
page faults that I observe with SMP l2 guests. The nested page fault
loop happens in conjunction with the change to nestedhap_fix_p2m() in
patch 4.

Christoph

--
---to satisfy European Law for business letters:
Advanced Micro Devices GmbH
Einsteinring 24, 85689 Dornach b. Muenchen
Geschaeftsfuehrer: Alberto Bozzo, Andrew Bowd
Sitz: Dornach, Gemeinde Aschheim, Landkreis Muenchen
Registergericht Muenchen, HRB Nr. 43632
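For reference, a minimal sketch of how the fallback path in
p2m_get_nestedp2m() might look with the suggested replacement applied.
Only the two flush lines come from the hunks quoted in this thread; the
bookkeeping after them is an assumption that mirrors the fast path in
Tim's patch above, not a verbatim excerpt from the tree.

    /* Fallback: nv->nv_p2m is absent or does not match cr3.            */
    /* Recycle one nested p2m instead of flushing all of them in a loop */
    /* (the old loop re-flushed tables still in use by other vcpus,     */
    /* which is what kept re-triggering nested page faults).            */
    p2m = p2m_getlru_nestedp2m(d, NULL);   /* least-recently-used table */
    p2m_flush_locked(p2m);                 /* flush only that one       */

    /* Assumed bookkeeping, mirroring the fast path quoted above. */
    nv->nv_p2m = p2m;
    nv->nv_flushp2m = 0;
    p2m->cr3 = cr3;
    nestedp2m_unlock(d);
    return p2m;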