Re: [PATCH v2 06/15] x86/HVM: improve CET-IBT pruning of ENDBR
On 24/11/2023 8:39 am, Jan Beulich wrote:
> __init{const,data}_cf_clobber can have an effect only for pointers
> actually populated in the respective tables. While not the case for SVM
> right now, VMX installs a number of pointers only under certain
> conditions. Hence the respective functions would have their ENDBR purged
> only when those conditions are met. Invoke "pruning" functions after
> having copied the respective tables, for them to install any "missing"
> pointers.
>
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>

In theory Acked-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, but see later.

I have to admit that I'd overlooked this point when putting together
__init{}_cf_clobber originally.  Then again, I did have more urgent
things on my mind at the time.

> ---
> This is largely cosmetic for present hardware, which when supporting
> CET-IBT likely also supports all of the advanced VMX features for which
> hook pointers are installed conditionally. The only case this would make
> a difference there is when use of respective features was suppressed via
> command line option (where available). For future hooks it may end up
> relevant even by default, and it also would be if AMD started supporting
> CET-IBT; right now it matters only for .pi_update_irte, as iommu_intpost
> continues to default to off.
>
> Originally I had meant to put the SVM and VMX functions in presmp-
> initcalls, but hvm/{svm,vmx}/built_in.o are linked into hvm/built_in.o
> before hvm/hvm.o. And I don't think I want to fiddle with link order
> here.

An alternative is the form I used for microcode, where start_{vmx,svm}()
fills in fns, and doesn't have to fill in all hooks.  That will be more
amenable to Kconfig-ing generally, and will probably be less fragile to
getting forgotten.

> ---
> v2: Use cpu_has_xen_ibt in prune_{svm,vmx}().
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -161,10 +161,15 @@ static int __init cf_check hvm_enable(vo
>      else if ( cpu_has_svm )
>          fns = start_svm();
>
> +    if ( fns )
> +        hvm_funcs = *fns;
> +
> +    prune_vmx();
> +    prune_svm();
> +
>      if ( fns == NULL )
>          return 0;
>
> -    hvm_funcs = *fns;
>      hvm_enabled = 1;
>
>      printk("HVM: %s enabled\n", fns->name);
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -2587,6 +2587,19 @@ const struct hvm_function_table * __init
>      return &svm_function_table;
>  }
>
> +void __init prune_svm(void)
> +{
> +    /*
> +     * Now that svm_function_table was copied, populate all function pointers
> +     * which may have been left at NULL, for __initdata_cf_clobber to have as
> +     * much of an effect as possible.
> +     */
> +    if ( !cpu_has_xen_ibt )
> +        return;
> +
> +    /* Nothing at present. */
> +}
> +
>  void asmlinkage svm_vmexit_handler(void)
>  {
>      struct cpu_user_regs *regs = guest_cpu_user_regs();
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -3033,6 +3033,30 @@ const struct hvm_function_table * __init
>      return &vmx_function_table;
>  }
>
> +void __init prune_vmx(void)
> +{
> +    /*
> +     * Now that vmx_function_table was copied, populate all function pointers
> +     * which may have been left at NULL, for __initdata_cf_clobber to have as
> +     * much of an effect as possible.
> +     */
> +    if ( !cpu_has_xen_ibt )
> +        return;
> +
> +    vmx_function_table.set_descriptor_access_exiting =
> +        vmx_set_descriptor_access_exiting;
> +
> +    vmx_function_table.update_eoi_exit_bitmap = vmx_update_eoi_exit_bitmap;
> +    vmx_function_table.process_isr = vmx_process_isr;
> +    vmx_function_table.handle_eoi = vmx_handle_eoi;
> +
> +    vmx_function_table.pi_update_irte = vmx_pi_update_irte;
> +
> +    vmx_function_table.deliver_posted_intr = vmx_deliver_posted_intr;
> +    vmx_function_table.sync_pir_to_irr = vmx_sync_pir_to_irr;
> +    vmx_function_table.test_pir = vmx_test_pir;

That said...
This (the hooks being conditional in the first place) is bogus to begin
with.  Posted interrupts (or not) are a per-VM property even if we don't
wire this up properly yet.  It will be forced to be done properly in
order to support nested virt, as L0 Xen *must* comply with the settings
chosen by the L1 hypervisor.  So the choice to use the hooks will have
to come from per-vCPU state, and not from the conditional-ness of them.

Any chance I can talk you into instead making the hooks unconditional?
If not, someone (George was volunteering) is going to have to undo this
in fairly short order.

~Andrew