
Re: [Xen-devel] [PATCH 1/1 V3] x86/AMD: Fix nested svm crash due to assertion in __virt_to_maddr



On 7/12/2013 3:01 AM, Jan Beulich wrote:
On 11.07.13 at 19:34, <suravee.suthikulpanit@xxxxxxx> wrote:
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -191,6 +191,19 @@ out:
      return rc;
  }
+int
+nestedhvm_walk_L0_p2m(struct vcpu *v, paddr_t L1_gpa, paddr_t *L0_gpa)
+{
+    p2m_type_t p2mt_10;
+    unsigned int page_order_10;
+    p2m_access_t p2ma_10 = p2m_access_rwx;
Pointless initializer?
These are mostly required by the prototype of nestedhap_walk_L0_p2m(). However, I noticed that I don't actually need the page order, so I can drop that one.

+
+    return nestedhap_walk_L0_p2m ( p2m_get_hostp2m(v->domain),
Extra spaces around "(".
Ah, thanks.

+                                 L1_gpa, L0_gpa,
+                                 &p2mt_10, &p2ma_10, &page_order_10,
+                                 0, 0, 0);
Wouldn't the caller's use of the GPA imply that you want read and
write access here?
Actually, access_r and access_x are not used in the nestedhap_walk_L0_p2m() function.
Since we are not writing to this GPA, do we need write access?

+}
I'm not clear about the need for this new wrapper: is it really of no concern to the caller what type, access, and order get returned here? Is it really too much of a burden to have the two call sites make the call directly? All the more so since (see above) you'd really need to give the caller control over the access requested.
OK, I will just make nestedhap_walk_L0_p2m non-static and add the prototype to svm.h then.
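
For illustration, a rough sketch of that approach, assuming the existing signature shown in the hunk above and the header location mentioned in the reply; the local variable names at the call site are hypothetical, not the final patch:

    /* In xen/arch/x86/mm/hap/nested_hap.c: drop "static" from the definition,
     * and add the matching declaration to the header (svm.h per the reply above): */
    int
    nestedhap_walk_L0_p2m(struct p2m_domain *p2m, paddr_t L1_gpa, paddr_t *L0_gpa,
                          p2m_type_t *p2mt, p2m_access_t *p2ma,
                          unsigned int *page_order,
                          bool_t access_r, bool_t access_w, bool_t access_x);

    /* At the SVM call site, do the walk directly and pass the access bits
     * that match the intended use of the translated address
     * (hypothetical local names): */
    p2m_type_t p2mt;
    p2m_access_t p2ma;
    unsigned int page_order;

    rc = nestedhap_walk_L0_p2m(p2m_get_hostp2m(v->domain), L1_gpa, &L0_gpa,
                               &p2mt, &p2ma, &page_order,
                               1 /* read */, 1 /* write */, 0 /* exec */);

This keeps the caller in control of the requested access, which was the concern raised above about the wrapper hard-coding 0, 0, 0.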

Finally, considering that you now change a file under
xen/arch/x86/mm/, you should have Cc'ed Tim on the patch
submission.
Thanks for pointing that out.

Suravee


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

