Re: [Xen-devel] [PATCH for-4.10 2/2] xen/arm: p2m: Add more debug in get_page_from_gva
Hi Andrew,

On 11/15/2017 07:43 PM, Andrew Cooper wrote:
> On 15/11/17 19:34, Julien Grall wrote:
>> The function get_page_from_gva is used by the copy_*_guest helpers to
>> translate a guest virtual address to a machine physical address and
>> take a reference on the page. There are a couple of error paths that
>> return the same value, making it difficult to know the exact error.
>>
>> Add more debug output in each error path, for debug builds only. This
>> should help narrow down the intermittent failure with the hypercall
>> GNTTABOP_copy (see [1]).
>>
>> [1] https://lists.xen.org/archives/html/xen-devel/2017-11/msg00942.html
>>
>> Signed-off-by: Julien Grall <julien.grall@xxxxxxxxxx>
>> ---
>>  xen/arch/arm/p2m.c | 13 +++++++++++++
>>  1 file changed, 13 insertions(+)
>>
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index f6b3d8e421..417609ede2 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -1428,16 +1428,29 @@ struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
>>      par = gvirt_to_maddr(va, &maddr, flags);
>>
>>      if ( par )
>> +    {
>> +        dprintk(XENLOG_G_DEBUG,
>> +                "%pv: gvirt_to_maddr failed va=%#"PRIvaddr" flags=0x%lx par=%#"PRIx64"\n",
>> +                v, va, flags, par);
>
> Given the long round-trip time on debugging output, how about trying to
> dump the guest and/or second stage table walk?

I thought about it; however, at the moment dump_s1_guest_walk() is very
minimal and would not add much value here.

That said, now that we have code to do a first-stage walk (see
guest_walk_tables), we might be able to get a better dump here, though I
am not sure it would be 4.10 material.

However, I think we could try to translate the guest VA to a guest PA
using the hardware instruction and then do the second-stage walk using
dump_p2m_lookup. Let me have a look.

Cheers,

--
Julien Grall
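For illustration, the approach described at the end of the mail (translate the guest VA to a guest PA with the hardware stage-1 translation, then dump the second-stage walk) could be sketched roughly as below. This is a hypothetical, non-compiling sketch, not the actual patch: gva_to_ipa() and dump_p2m_lookup() are existing helpers in the Xen Arm code, but the wiring, the function name dump_failing_va, and the error handling here are assumptions.

```c
/* Hypothetical helper for the failure path in get_page_from_gva():
 * use the hardware stage-1 translation (gva_to_ipa(), which issues an
 * AT instruction under the hood) to get the guest PA, then dump the
 * second-stage (p2m) walk for it with dump_p2m_lookup(). */
static void dump_failing_va(struct vcpu *v, vaddr_t va, unsigned int flags)
{
    paddr_t ipa;

    if ( gva_to_ipa(va, &ipa, flags) )
    {
        /* Stage-1 translation itself failed: nothing to walk at stage 2. */
        dprintk(XENLOG_G_DEBUG,
                "%pv: stage-1 translation of va=%#"PRIvaddr" failed\n",
                v, va);
        return;
    }

    dump_p2m_lookup(v->domain, ipa);
}
```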