Re: [Xen-devel] [PATCH V2] x86/emulate: synchronize LOCKed instruction emulation
>>> On 15.03.17 at 17:04, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
> ---
> Changes since V1:
>  - Added Andrew Cooper's credit, as he's kept the patch current
>    throughout non-trivial code changes since the initial patch.
>  - Significantly more patch testing (with XenServer).
>  - Restricted lock scope.

Not by much, as it seems. In particular you continue to take the lock
even for instructions not accessing memory at all. Also, by "reworked"
I did assume you meant converted to at least the cmpxchg based model.

> --- a/tools/tests/x86_emulator/test_x86_emulator.c
> +++ b/tools/tests/x86_emulator/test_x86_emulator.c
> @@ -283,6 +283,14 @@ static int read_msr(
>      return X86EMUL_UNHANDLEABLE;
>  }
>
> +static void smp_lock(bool locked)
> +{
> +}
> +
> +static void smp_unlock(bool locked)
> +{
> +}

I don't think the hooks should be a requirement, and hence these
shouldn't be needed.

> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -529,6 +529,8 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags,
>      if ( config == NULL && !is_idle_domain(d) )
>          return -EINVAL;
>
> +    percpu_rwlock_resource_init(&d->arch.emulate_lock, emulate_locked_rwlock);

This should move into the same file as where the lock and the hook
functions live, so that the variable can be static. I'm not sure ...

> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -24,6 +24,8 @@
>  #include <asm/hvm/svm/svm.h>
>  #include <asm/vm_event.h>
>
> +DEFINE_PERCPU_RWLOCK_GLOBAL(emulate_locked_rwlock);

... this is the right file, though, considering the wide (including PV)
use of it.

> @@ -1731,6 +1755,8 @@ static const struct x86_emulate_ops hvm_emulate_ops_no_write = {
>      .put_fpu       = hvmemul_put_fpu,
>      .invlpg        = hvmemul_invlpg,
>      .vmfunc        = hvmemul_vmfunc,
> +    .smp_lock      = emulate_smp_lock,
> +    .smp_unlock    = emulate_smp_unlock,
>  };

No need for the hooks here.

> @@ -5485,6 +5487,8 @@ static const struct x86_emulate_ops mmio_ro_emulate_ops = {
>      .write      = mmio_ro_emulated_write,
>      .validate   = pv_emul_is_mem_write,
>      .cpuid      = pv_emul_cpuid,
> +    .smp_lock   = emulate_smp_lock,
> +    .smp_unlock = emulate_smp_unlock,
>  };

Nor here.

> @@ -5524,6 +5528,8 @@ static const struct x86_emulate_ops mmcfg_intercept_ops = {
>      .write      = mmcfg_intercept_write,
>      .validate   = pv_emul_is_mem_write,
>      .cpuid      = pv_emul_cpuid,
> +    .smp_lock   = emulate_smp_lock,
> +    .smp_unlock = emulate_smp_unlock,
>  };

Not sure about this one, but generally I'd expect no LOCKed accesses
to MMCFG space.

> --- a/xen/arch/x86/traps.c
> +++ b/xen/arch/x86/traps.c
> @@ -2957,6 +2957,8 @@ static const struct x86_emulate_ops priv_op_ops = {
>      .write_msr  = priv_op_write_msr,
>      .cpuid      = pv_emul_cpuid,
>      .wbinvd     = priv_op_wbinvd,
> +    .smp_lock   = emulate_smp_lock,
> +    .smp_unlock = emulate_smp_unlock,
>  };

No need for the hooks again.

> @@ -3065,6 +3065,8 @@ x86_emulate(
>      d = state.desc;
>  #define state (&state)
>
> +    ops->smp_lock(lock_prefix);

There's a "goto complete_insn" upwards from here, which therefore
bypasses the acquire, but goes through the release path. Also this is
still too early to take the lock.

> @@ -7925,8 +7932,11 @@ x86_emulate(
>      ctxt->regs->eflags &= ~X86_EFLAGS_RF;
>
>  done:
> +    ops->smp_unlock(lock_prefix);

And this, imo, is too late (except for covering error exits coming
here). I don't think you can avoid having a local tracking variable.
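Something along these lines is what I have in mind (an untested sketch
only; where exactly the acquire would go depends on the restructuring
asked for above, and it assumes the hooks become optional as suggested):

    bool smp_locked = false;
    ...
    /* Only once it is known the instruction actually accesses memory: */
    if ( ops->smp_lock )
    {
        ops->smp_lock(lock_prefix);
        smp_locked = true;
    }
    ...
 done:
    /* Release only what was acquired, covering all exit paths equally. */
    if ( smp_locked )
        ops->smp_unlock(lock_prefix);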
> --- a/xen/arch/x86/x86_emulate/x86_emulate.h
> +++ b/xen/arch/x86/x86_emulate/x86_emulate.h
> @@ -448,6 +448,14 @@ struct x86_emulate_ops
>      /* vmfunc: Emulate VMFUNC via given set of EAX ECX inputs */
>      int (*vmfunc)(
>          struct x86_emulate_ctxt *ctxt);
> +
> +    /* smp_lock: Take a write lock if locked, read lock otherwise. */
> +    void (*smp_lock)(
> +        bool locked);
> +
> +    /* smp_unlock: Write unlock if locked, read unlock otherwise. */
> +    void (*smp_unlock)(
> +        bool locked);
>  };

All hooks should take a ctxt pointer, i.e. something along the lines
of the sketch below.
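Just a sketch of the declarations, with ctxt last to match the
parameter ordering of the other hooks in the struct:

    /* smp_lock: Take a write lock if @locked, read lock otherwise. */
    void (*smp_lock)(
        bool locked,
        struct x86_emulate_ctxt *ctxt);

    /* smp_unlock: Write unlock if @locked, read unlock otherwise. */
    void (*smp_unlock)(
        bool locked,
        struct x86_emulate_ctxt *ctxt);

Jan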