Re: [Xen-devel] [PATCH] SVM: re-work VMCB sync-ing
>>> On 26.04.18 at 14:43, <andrew.cooper3@xxxxxxxxxx> wrote:
> FWIW, in my not-yet-complete patch for the issue, I'd gone with
>
> @@ -90,7 +91,14 @@ UNLIKELY_END(svm_trace)
>          pop  %r13
>          pop  %r12
>          pop  %rbp
> +
>          mov  VCPU_svm_vmcb_pa(%rbx),%rax
> +        cmpb $0, VCPU_svm_vmload_needed(%rbx)
> +        je   1f
> +        VMLOAD
> +        movb $0, VCPU_svm_vmload_needed(%rbx)
> +1:
> +
>          pop  %rbx
>          pop  %r11
>          pop  %r10
>
> (albeit with a slightly different state structure) and was considering
> putting the VMLOAD in an unlikely section to get the static prediction
> the correct way around.

Well, I'd really like to avoid putting this in assembly. In fact I've
meanwhile coded up a follow-on patch introducing svm_vmenter_helper(),
where this as well as the register value copying (currently also done
in assembly for no reason) now lives.

>> --- a/xen/arch/x86/hvm/svm/svm.c
>> +++ b/xen/arch/x86/hvm/svm/svm.c
>> @@ -680,16 +680,15 @@ static void svm_cpuid_policy_changed(str
>>                        cp->extd.ibpb ? MSR_INTERCEPT_NONE :
>>                                        MSR_INTERCEPT_RW);
>>  }
>>
>> -static void svm_sync_vmcb(struct vcpu *v)
>> +static void svm_sync_vmcb(struct vcpu *v, enum vmcb_sync_state new_state)
>>  {
>>      struct arch_svm_struct *arch_svm = &v->arch.hvm_svm;
>>
>> -    if ( arch_svm->vmcb_in_sync )
>> -        return;
>> -
>> -    arch_svm->vmcb_in_sync = 1;
>> +    if ( arch_svm->vmcb_sync_state == vmcb_needs_vmsave )
>> +        svm_vmsave(arch_svm->vmcb);
>>
>> -    svm_vmsave(arch_svm->vmcb);
>> +    if ( arch_svm->vmcb_sync_state != vmcb_needs_vmload )
>> +        arch_svm->vmcb_sync_state = new_state;
>
> This is slightly awkward for a couple of reasons.  First, passing
> vmcb_in_sync in forgets the fact that a vmload is needed.

Certainly not - that's the purpose of the if() around it.

> In my patch, I introduced svm_sync_vmcb_for_update(), rather than
> requiring a parameter to be passed in.  I think this is a better API,
> and it shrinks the size of the patch.

I'm not convinced of the "better", and even less so of the "shrinks".
But I'll wait to see what the SVM maintainers say.

> Also, we currently need segment register codepaths to work from
> non-current context, and will very shortly want the MSR paths to do
> the same.
>
> By putting an ASSERT(v == current) beside the
> svm_vmsave(arch_svm->vmcb), we can drop the (v == current) check in
> the callers, and catch corner cases where state isn't in sync for a
> remote update.

That's an option, but imo not a requirement.

>> @@ -1086,7 +1079,7 @@ static void svm_ctxt_switch_from(struct
>>      svm_lwp_save(v);
>>      svm_tsc_ratio_save(v);
>>
>> -    svm_sync_vmcb(v);
>> +    svm_sync_vmcb(v, vmcb_needs_vmload);
>>      svm_vmload_pa(per_cpu(host_vmcb, cpu));
>>
>>      /* Resume use of ISTs now that the host TR is reinstated. */
>> @@ -1114,7 +1107,6 @@ static void svm_ctxt_switch_to(struct vc
>>      svm_restore_dr(v);
>>
>>      svm_vmsave_pa(per_cpu(host_vmcb, cpu));
>
> As an observation, this vmsave isn't needed.  All state in host_vmcb
> is either constant after the AP bringup-path, or loaded properly in
> the PV ctxt_switch_to() path.
>
> This could be a future perf improvement, though.

Yeah, I was wondering about the need for this, but since it's
orthogonal to the purpose of the patch, I didn't want to touch it
here.

>> --- a/xen/include/asm-x86/hvm/svm/vmcb.h
>> +++ b/xen/include/asm-x86/hvm/svm/vmcb.h
>> @@ -495,12 +495,19 @@ struct vmcb_struct {
>>  struct svm_domain {
>>  };
>>
>> +enum vmcb_sync_state {
>> +    vmcb_in_sync,
>> +    vmcb_needs_vmsave, /* VMCB out of sync (VMSAVE needed)? */
>> +    vmcb_needs_vmload  /* VMCB dirty (VMLOAD needed)? */
>> +};
>
> If I were you, I'd go with
>
> enum vmcb_sync_state {
>     vmcb_needs_vmsave,
>     vmcb_in_sync,
>     vmcb_needs_vmload,
> };
>
> so it numerically follows the logical progression.

Much depends on what you consider the "beginning" state. It could
equally well be the inverse of what you suggest, taking the initial
VMCB as the first step in the life cycle.
>> +
>>  struct arch_svm_struct {
>>      struct vmcb_struct *vmcb;
>>      u64    vmcb_pa;
>>      unsigned long *msrpm;
>>      int    launch_core;
>> -    bool_t vmcb_in_sync;    /* VMCB sync'ed with VMSAVE? */
>> +
>
> /*
>  * VMRUN doesn't switch fs/gs/tr/ldtr and SHADOWGS/SYSCALL/SYSENTER
>  * state.  Therefore, guest state is in the hardware registers when
>  * servicing a VMExit.
>  *
>  * Immediately after a VMExit, the vmcb is stale, and needs to be
>  * brought into sync by VMSAVE.  If state in the vmcb is modified, a
>  * VMLOAD is needed before the following VMRUN.
>  */

Well, yes, I can certainly add this, but if I do, then next to the
enum I think.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel