Re: [Xen-devel] [PATCH 13/14] xen/mm: Convert {s,g}et_gpfn_from_mfn() to use typesafe MFN
On 10/05/2019 14:41, Jan Beulich wrote:
>>>> On 10.05.19 at 15:34, <julien.grall@xxxxxxx> wrote:
>> On 10/05/2019 14:21, Jan Beulich wrote:
>>>>>> On 07.05.19 at 17:14, <julien.grall@xxxxxxx> wrote:
>>>> @@ -1030,19 +1031,19 @@ long p2m_pt_audit_p2m(struct p2m_domain *p2m)
>>>> @@ -2795,54 +2795,54 @@ void audit_p2m(struct domain *d,
>>>>      spin_lock(&d->page_alloc_lock);
>>>>      page_list_for_each ( page, &d->page_list )
>>>>      {
>>>> -        mfn = mfn_x(page_to_mfn(page));
>>>> +        mfn = page_to_mfn(page);
>>>> -        P2M_PRINTK("auditing guest page, mfn=%#lx\n", mfn);
>>>> +        P2M_PRINTK("auditing guest page, mfn=%"PRI_mfn"\n", mfn_x(mfn));
>>>>          od = page_get_owner(page);
>>>>
>>>>          if ( od != d )
>>>>          {
>>>> -            P2M_PRINTK("wrong owner %#lx -> %p(%u) != %p(%u)\n",
>>>> -                       mfn, od, (od?od->domain_id:-1), d, d->domain_id);
>>>> +            P2M_PRINTK("wrong owner %"PRI_mfn" -> %p(%u) != %p(%u)\n",
>>>> +                       mfn_x(mfn), od, (od?od->domain_id:-1), d, d->domain_id);
>>> Please be careful not to drop 0x prefixes from the resulting output
>>> (which are an effect of the # flag that you delete), at least when
>>> log messages contain a mix of hex and dec numbers. (I am, btw, not
>>> convinced that switching to PRI_mfn here is helpful.)
>> Last time I kept %# for MFN, I was asked to remove the #. I prefer
>> having 0x for all the hex, and I am happy to keep it as is. But I
>> would like a bit of consistency in the way we print MFNs...
> Well, "%#"PRI_mfn is bogus (because of the combination with the
> minimum width specification), so it ought to be "%#lx" or
> "0x%"PRI_mfn. Have you really been asked for something else? If so,
> and if it was me, then I apologize.

I am not sure why this is bogus. The thing is, using different formats
for the MFN makes it difficult to read a message without looking at the
format string.

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
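
For background on the conversion being reviewed: a typesafe MFN wraps
the raw unsigned long in a one-member struct, so the compiler rejects
accidental mixing of MFNs with GFNs or plain integers, and mfn_x()
unwraps it only where the raw value is needed (as in the P2M_PRINTK()
calls above). Below is a minimal sketch of the idea, simplified from
Xen's TYPE_SAFE machinery; the names match Xen's, but the standalone
harness is illustrative only.

    #include <stdio.h>

    /* One-member struct: zero runtime cost, but passing a raw long (or
     * a differently-typed frame number) where an mfn_t is expected
     * becomes a compile-time error. Simplified from Xen's TYPE_SAFE(). */
    typedef struct { unsigned long mfn; } mfn_t;

    static inline mfn_t _mfn(unsigned long m) { return (mfn_t){ m }; }
    static inline unsigned long mfn_x(mfn_t m) { return m.mfn; }

    /* Hypothetical stand-in for the audit loop body above. */
    static void audit_one_page(mfn_t mfn)
    {
        /* Unwrap only at the point of use, e.g. for printing. */
        printf("auditing guest page, mfn=%lx\n", mfn_x(mfn));
    }

    int main(void)
    {
        audit_one_page(_mfn(0x2a));
        /* audit_one_page(0x2a);  -- rejected by the compiler */
        return 0;
    }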
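
On the "%#"PRI_mfn point specifically: PRI_mfn carries a zero-padded
minimum field width, and the "0x" emitted by the # flag counts toward
that width, so combining the two silently eats padding and breaks
column alignment. A small standalone illustration, assuming PRI_mfn
expands to "05lx" as in the Xen tree of this period:

    #include <stdio.h>

    /* Assumed definition, mirroring Xen's (see xen/include/xen/mm.h). */
    #define PRI_mfn "05lx"

    int main(void)
    {
        unsigned long mfn = 0x2a;

        printf("%"PRI_mfn"\n", mfn);   /* "0002a": fixed width of 5    */
        printf("%#"PRI_mfn"\n", mfn);  /* "0x02a": "0x" counts toward
                                        * the width, padding shrinks   */

        /* Jan's two suggested alternatives: */
        printf("%#lx\n", mfn);         /* "0x2a": prefix, no padding   */
        printf("0x%"PRI_mfn"\n", mfn); /* "0x0002a": prefix and width  */

        return 0;
    }

Either alternative keeps the 0x prefix without fighting the width
specification, which is the sense in which the combination is called
bogus above.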