
Re: [Xen-devel] [PATCH v2 20/25] arm/altp2m: Add altp2m paging mechanism.





On 10/08/2016 11:32, Sergej Proskurin wrote:
Hi Julien,

Hello Sergej,

[...]

     switch ( fsc )
     {
+    case FSC_FLT_TRANS:
+    {
+        if ( altp2m_active(d) )
+        {
+            const struct npfec npfec = {
+                .insn_fetch = 1,
+                .gla_valid = 1,
+                .kind = hsr.iabt.s1ptw ? npfec_kind_in_gpt :
+                        npfec_kind_with_gla
+            };
+
+            /*
+             * Copy the entire page of the failing instruction into the
+             * currently active altp2m view.
+             */
+            if ( altp2m_lazy_copy(v, gpa, gva, npfec, &p2m) )
+                return;

I forgot to mention that I think there is a race condition here. If
multiple vCPUs (let's say A and B) use the same altp2m, they may both
fault here.

If vCPU A has already fixed the fault, this function will return false
for vCPU B and continue, which will lead to an instruction abort being
injected into the guest.
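
To spell the flow out in code (a trimmed sketch of
do_trap_instr_abort_guest(); surrounding cases and argument lists are
shortened, so the details are only indicative):

    switch ( fsc )
    {
    case FSC_FLT_TRANS:
        ...
        if ( altp2m_lazy_copy(v, gpa, gva, npfec, &p2m) )
            return;     /* true: mapping fixed up, the guest retries */
        ...
        break;
    }

    /* Reached when altp2m_lazy_copy() returned false. */
    inject_iabt_exception(regs, gva, hsr.len);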


I have solved this issue as well:

In altp2m_lazy_copy, we check whether the faulting address is already
mapped in the current altp2m view. The only reason why the current
altp2m should already have a valid entry for the apparently faulting
address is that it was previously (almost simultaneously) mapped by
another vCPU. That is, if the mapping for the faulting address is valid
in the altp2m, we return true and hence let the guest retry the access
in question, without injecting an instruction/data abort exception.
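
Roughly, the check looks like the following (only a sketch; the lookup
helper below is illustrative and stands in for whatever walks the
currently active altp2m view, not an exact function from the series):

    /* At the top of altp2m_lazy_copy(), with ap2m the active view: */
    if ( altp2m_mapping_valid(ap2m, gpa) )
    {
        /*
         * Another vCPU sharing this view has already copied the page,
         * so there is nothing left to do.  Returning true makes the
         * caller bail out of the abort handler, and the guest simply
         * retries the access instead of receiving an abort.
         */
        return 1;
    }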

I am afraid that your description does not match the implementation of altp2m_lazy_copy in this version of the patch series.

If you find a valid entry in the altp2m, you will return 0 (i.e. false). This will lead to an abort being injected into the guest.

Regards,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

