Re: [Xen-devel] [PATCH 2/4] x86: eliminate most XPTI entry/exit code when it's not in use
>>> On 05.02.18 at 18:28, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 30/01/18 13:51, Jan Beulich wrote:
>>>> --- a/xen/arch/x86/x86_64/compat/entry.S
>>>> +++ b/xen/arch/x86/x86_64/compat/entry.S
>>>> @@ -189,7 +189,7 @@ ENTRY(compat_post_handle_exception)
>>>>
>>>>  /* See lstar_enter for entry register state. */
>>>>  ENTRY(cstar_enter)
>>>> -        /* sti could live here when we don't switch page tables below. */
>>>> +        ALTERNATIVE nop, sti, X86_FEATURE_NO_XPTI
>>> I do not think the complexity of altering the position of sti
>>> outweighs the fractional extra delay which would result from
>>> unilaterally having the sti later.  Furthermore, if you really are
>>> concerned about micro-optimising this, you don't want a single-byte
>>> nop here.
>>>
>>>>          CR4_PV32_RESTORE
>> There is, not the least, this, which I'm afraid is adding quite a bit
>> of a delay. While we're not real-time ready, I don't think we should
>> needlessly delay the enabling of interrupts.
>
> The lion's share of the delay is the serialising effect of the write
> to cr4, which also blocks interrupts.

What the main cause of the delay is doesn't matter - interrupts will
necessarily be held off for the duration of the execution of this one
instruction. However, with the STI ahead of it, interrupts which became
pending _before_ the CR4 write would have been serviced already. IOW by
moving the STI ahead we're splitting an interrupts-delayed window into
two.

>>>> --- a/xen/arch/x86/x86_64/entry.S
>>>> +++ b/xen/arch/x86/x86_64/entry.S
>>>> @@ -46,7 +47,6 @@ restore_all_guest:
>>>>          movabs $DIRECTMAP_VIRT_START, %rcx
>>>>          mov   %rdi, %rax
>>>>          and   %rsi, %rdi
>>>> -        jz    .Lrag_keep_cr3
>>> This looks like a functional change?
>> Definitely not - the conditional branch simply becomes unnecessary
>> when the entire piece of code gets NOP-ed out.
>
> Hmm.  That is not at all obvious.  What about cases where we'd want
> to conditionally disable xpti on a per domain basis?

It would be at that time that the branch would need to be re-introduced.
But I hope we'll never make it there, and have a better solution in
place before we introduce a per-domain control for this.

>>>> @@ -492,9 +519,20 @@ ENTRY(common_interrupt)
>>>>          CR4_PV32_RESTORE
>>>>          movq %rsp,%rdi
>>>>          callq do_IRQ
>>>> +.Lintr_cr3_restore:
>>>>          mov   %r15, STACK_CPUINFO_FIELD(xen_cr3)(%r14)
>>>> +.Lintr_cr3_end:
>>>>          jmp ret_from_intr
>>>>
>>>> +        .pushsection .altinstructions, "a", @progbits
>>>> +        altinstruction_entry .Lintr_cr3_restore, .Lintr_cr3_restore, \
>>>> +                             X86_FEATURE_NO_XPTI, \
>>>> +                             (.Lintr_cr3_end - .Lintr_cr3_restore), 0
>>>> +        altinstruction_entry .Lintr_cr3_start, .Lintr_cr3_start, \
>>>> +                             X86_FEATURE_NO_XPTI, \
>>>> +                             (.Lintr_cr3_okay - .Lintr_cr3_start), 0
>>> This is now getting very complicated to follow.  Is it just for IST
>>> safety and liable to disappear?  If not, I think we need a different
>>> way, as this is now saying "sporadic instructions inside this block,
>>> but not all of them, turn into nops".
>> This is not an IST path, and it is also not NOP-ing out sporadic
>> instructions - we can't drop the first piece of code without also
>> dropping the second, as %r14 won't be set up if the first block
>> is gone. They're clearly framed by .Lintr_cr3_* labels - I'm not
>> sure how to make even more obvious what's going on.
>
> I know you're not a fan of my SPEC_CTRL macros, but they do make it
> very clear what is going on in each configuration.
>
> They are certainly clearer than this.

A matter of personal preference, I'm afraid.
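
(As a minimal sketch of the pattern under discussion - label names here
are illustrative only, the real code uses the .Lintr_cr3_* labels in
entry.S - a framed block is registered as a zero-length "replacement" of
itself, so the alternatives patching turns the entire range into NOPs
when X86_FEATURE_NO_XPTI is set:

.Lblk_start:
        /* Code wanted only while XPTI is in use. */
        mov   %r15, STACK_CPUINFO_FIELD(xen_cr3)(%r14)
.Lblk_end:

        .pushsection .altinstructions, "a", @progbits
        /*
         * Source and destination are the same and the replacement
         * length is 0, so the whole original range gets padded with
         * NOPs when the feature flag is set.
         */
        altinstruction_entry .Lblk_start, .Lblk_start, \
                             X86_FEATURE_NO_XPTI, \
                             (.Lblk_end - .Lblk_start), 0
        .popsection
)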
Jan