Re: [Xen-devel] [for-4.7] x86/emulate: synchronize LOCKed instruction emulation
On 04/18/2016 07:45 PM, Jan Beulich wrote:
>>>> Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx> 04/18/16 2:40 PM >>>
>> On 04/14/2016 07:08 PM, Jan Beulich wrote:
>>>>>> Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx> 04/14/16 5:45 PM >>>
>>>> On 04/14/2016 06:40 PM, Jan Beulich wrote:
>>>>> To be honest, just having remembered that we do the write back for
>>>>> locked instructions using CMPXCHG, I'd first of all like to see a
>>>>> proper description of "the _whole_ issue".
>>>>
>>>> I believe at least part of the issue has to do with the comment on
>>>> line 1013 from xen/arch/x86/hvm/emulate.c:
>>>>
>>>>  994 static int hvmemul_cmpxchg(
>>>>  995     enum x86_segment seg,
>>>>  996     unsigned long offset,
>>>>  997     void *p_old,
>>>>  998     void *p_new,
>>>>  999     unsigned int bytes,
>>>> 1000     struct x86_emulate_ctxt *ctxt)
>>>> 1001 {
>>>> 1002     struct hvm_emulate_ctxt *hvmemul_ctxt =
>>>> 1003         container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
>>>> 1004
>>>> 1005     if ( unlikely(hvmemul_ctxt->set_context) )
>>>> 1006     {
>>>> 1007         int rc = set_context_data(p_new, bytes);
>>>> 1008
>>>> 1009         if ( rc != X86EMUL_OKAY )
>>>> 1010             return rc;
>>>> 1011     }
>>>> 1012
>>>> 1013     /* Fix this in case the guest is really relying on r-m-w atomicity. */
>>>> 1014     return hvmemul_write(seg, offset, p_new, bytes, ctxt);
>>>> 1015 }
>>>
>>> Ah, so _that's_ where the problem wants to be fixed then (leaving -
>>> afaict - PV emulation paths completely unaffected).
>>
>> That's what I had hoped too; an early version of this patch simply
>> used a spinlock that was locked / unlocked on entry / exit of
>> hvmemul_cmpxchg(). Even with this patch, I've just tried it again
>> with all ops->smp_lock() / ops->smp_unlock() calls commented out in
>> x86_emulate() and hvmemul_cmpxchg() modified as follows:
>>
>> static int hvmemul_cmpxchg(
>>     enum x86_segment seg,
>>     unsigned long offset,
>>     void *p_old,
>>     void *p_new,
>>     unsigned int bytes,
>>     struct x86_emulate_ctxt *ctxt)
>> {
>>     struct hvm_emulate_ctxt *hvmemul_ctxt =
>>         container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
>>     int rc;
>>
>>     emulate_smp_lock(1);
>>
>>     if ( unlikely(hvmemul_ctxt->set_context) )
>>     {
>>         rc = set_context_data(p_new, bytes);
>>
>>         if ( rc != X86EMUL_OKAY )
>>         {
>>             /* Don't leak the lock on the early exit path. */
>>             emulate_smp_unlock(1);
>>             return rc;
>>         }
>>     }
>>
>>     /* Fix this in case the guest is really relying on r-m-w atomicity. */
>>     rc = hvmemul_write(seg, offset, p_new, bytes, ctxt);
>>
>>     emulate_smp_unlock(1);
>>
>>     return rc;
>> }
>>
>> Unfortunately, with just this the guest still hangs, while with read
>> and write locking in x86_emulate() it does not.
>
> That's unexpected at least at the first glance, but justifying some
> variant of the patch you have submitted would require understanding
> why the above change isn't enough and can't be suitably extended to
> be sufficient.

I think this might be because the LOCK prefix should guarantee that the
instruction it prefixes has exclusive use of the shared memory (for
both reads and writes), but I might be misreading the docs. From the
Intel SDM:

"Causes the processor’s LOCK# signal to be asserted during execution of
the accompanying instruction (turns the instruction into an atomic
instruction). In a multiprocessor environment, the LOCK# signal ensures
that the processor has exclusive use of any shared memory while the
signal is asserted."
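To make the rwlock approach concrete: the emulate_smp_lock() /
emulate_smp_unlock() helpers used above are, in essence, just a rwlock
taken for writing when emulating a LOCKed instruction and for reading
everywhere else. A minimal sketch only, with the lock made global here
for brevity (in practice it would be per-domain state, and the helper
names and placement are this sketch's assumptions):

/* One Xen-wide lock purely for the sketch's sake; a real
 * implementation would keep this per-domain. */
static DEFINE_RWLOCK(emulate_lock);

void emulate_smp_lock(int locked)
{
    if ( locked )
        /* LOCKed r-m-w emulation excludes _all_ concurrently
         * emulated accesses to guest memory... */
        write_lock(&emulate_lock);
    else
        /* ...while ordinary emulated reads/writes only exclude
         * LOCKed (writer) emulation, not one another. */
        read_lock(&emulate_lock);
}

void emulate_smp_unlock(int locked)
{
    if ( locked )
        write_unlock(&emulate_lock);
    else
        read_unlock(&emulate_lock);
}

x86_emulate() would then wrap ordinary operand accesses in
emulate_smp_lock(0) / emulate_smp_unlock(0) and the LOCKed
read-modify-write sequence in emulate_smp_lock(1) /
emulate_smp_unlock(1), which is exactly what makes non-LOCKed readers
wait, as described below.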
Using a spin lock (or the rwlock in the manner described above) in
hvmemul_cmpxchg() only prevents two LOCKed instructions from running at
the same time. Presumably, though, even non-LOCKed instructions that
merely read that memory area should be prevented from running until the
LOCKed instruction is done (hence the guest starting up with the rwlock
taken in x86_emulate(), but not with it taken only in
hvmemul_cmpxchg()). Hopefully I haven't misunderstood the issue.

Thanks,
Razvan
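P.S. For completeness, a genuinely atomic write back (what the "Fix
this" comment asks for) might look roughly like the sketch below. The
names are this sketch's assumptions, not existing interfaces:
hvmemul_map_linear_addr() / hvmemul_unmap_linear_addr() stand in for
whatever would give us a writable mapping of the guest range, and
X86EMUL_CMPXCHG_FAILED for a new return code letting the caller emulate
CMPXCHG's miss (ZF=0) path:

static int hvmemul_cmpxchg_atomic(
    enum x86_segment seg,
    unsigned long offset,
    void *p_old,
    void *p_new,
    unsigned int bytes,
    struct x86_emulate_ctxt *ctxt)
{
    unsigned long old = 0, new = 0, cur;
    /* Assumed helper: writable mapping of seg:offset, bytes long. */
    void *mapping = hvmemul_map_linear_addr(seg, offset, bytes, ctxt);

    if ( !mapping )
        return X86EMUL_UNHANDLEABLE;

    ASSERT(bytes <= sizeof(old));
    memcpy(&old, p_old, bytes);
    memcpy(&new, p_new, bytes);

    /*
     * Do the write back with an actual LOCK CMPXCHG against guest
     * memory; __cmpxchg() is the size-taking variant behind the
     * cmpxchg() macro.
     */
    cur = __cmpxchg(mapping, old, new, bytes);

    hvmemul_unmap_linear_addr(mapping, ctxt);    /* assumed helper */

    if ( cur != old )
    {
        /*
         * Another vCPU (or the guest itself, running un-emulated on
         * another processor) got there first: hand the current value
         * back so the caller can complete CMPXCHG's failure path.
         */
        memcpy(p_old, &cur, bytes);
        return X86EMUL_CMPXCHG_FAILED;
    }

    return X86EMUL_OKAY;
}

With something along these lines in place, the cmpxchg hook would no
longer need the "really relying on r-m-w atomicity" disclaimer at all.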