[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: dom0 PV looping on search_pre_exception_table()


  • To: Manuel Bouyer <bouyer@xxxxxxxxxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Wed, 9 Dec 2020 18:08:53 +0000
  • Cc: <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Wed, 09 Dec 2020 18:09:07 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 09/12/2020 16:30, Manuel Bouyer wrote:
> On Wed, Dec 09, 2020 at 04:00:02PM +0000, Andrew Cooper wrote:
>> [...]
>>>> I wonder if the LDT is set up correctly.
>>> I guess it is, otherwise it wouldn't boot with a Xen 4.13 kernel, would it?
>> Well - you said you always saw it once on 4.13, which clearly shows that
>> something was wonky, but it managed to unblock itself.
>>
>>>> How about this incremental delta?
>>> Here's the output
>>> (XEN) IRET fault: #PF[0000]
>>> (XEN) %cr2 ffff820000010040, LDT base ffffc4800000a000, limit 0057
>>> (XEN) *** pv_map_ldt_shadow_page(0x40) failed
>>> (XEN) IRET fault: #PF[0000]
>>> (XEN) %cr2 ffff820000010040, LDT base ffffc4800000a000, limit 0057
>>> (XEN) *** pv_map_ldt_shadow_page(0x40) failed
>>> (XEN) IRET fault: #PF[0000]
>> Ok, so the promotion definitely fails, but we don't get as far as
>> inspecting the content of the LDT frame.  This probably means it failed
>> to change the page type, which probably means there are still
>> outstanding writeable references.
>>
>> I'm expecting the final printk to be the one which triggers.
> It's not. 
> Here's the output:
> (XEN) IRET fault: #PF[0000]
> (XEN) %cr2 ffff820000010040, LDT base ffffbd000000a000, limit 0057
> (XEN) *** LDT: gl1e 0000000000000000 not present
> (XEN) *** pv_map_ldt_shadow_page(0x40) failed
> (XEN) IRET fault: #PF[0000]
> (XEN) %cr2 ffff820000010040, LDT base ffffbd000000a000, limit 0057
> (XEN) *** LDT: gl1e 0000000000000000 not present
> (XEN) *** pv_map_ldt_shadow_page(0x40) failed

Ok.  So the mapping registered for the LDT is not yet present.  Xen
should be raising #PF with the guest, and would be in every case other
than the weird context on IRET, where we've confused bad guest state
with bad hypervisor state.

diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 3ac07a84c3..35c24ed668 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -1235,10 +1235,6 @@ static int handle_ldt_mapping_fault(unsigned int offset,
     {
         printk(XENLOG_ERR "*** pv_map_ldt_shadow_page(%#x) failed\n", offset);
 
-        /* In hypervisor mode? Leave it to the #PF handler to fix up. */
-        if ( !guest_mode(regs) )
-            return 0;
-
         /* Access would have become non-canonical? Pass #GP[sel] back. */
         if ( unlikely(!is_canonical_address(curr->arch.pv.ldt_base + offset)) )
         {


This bodge ought to cause a #PF to be delivered suitably, but may make
other corner cases not quite work correctly, so isn't a clean fix.

~Andrew
