Re: [Xen-devel] [PATCH v2] x86: use VMLOAD for PV context switch
On 9/11/18 10:38 AM, Jan Beulich wrote:
>>>> On 11.09.18 at 16:17, <boris.ostrovsky@xxxxxxxxxx> wrote:
>> On 9/11/18 3:54 AM, Jan Beulich wrote:
>>>>>> On 10.09.18 at 23:56, <boris.ostrovsky@xxxxxxxxxx> wrote:
>>>> On 09/10/2018 10:03 AM, Jan Beulich wrote:
>>>>> Having noticed that VMLOAD alone is about as fast as a single one
>>>>> of the involved WRMSRs, I thought it might be a reasonable idea to
>>>>> also use it for PV. Measurements, however, have shown that an
>>>>> actual improvement can be achieved only with an early prefetch of
>>>>> the VMCB (thanks to Andrew for suggesting to try this), which I
>>>>> have to admit I can't really explain. This way on my Fam15 box
>>>>> context switch takes over 100 clocks less on average (the measured
>>>>> values are heavily varying in all cases, though).
>>>>>
>>>>> This is intentionally not using a new hvm_funcs hook: For one,
>>>>> this is all about PV, and something similar can hardly be done for
>>>>> VMX. Furthermore, the indirect-to-direct call patching that is
>>>>> meant to be applied to most hvm_funcs hooks would be ugly to make
>>>>> work with functions having more than 6 parameters.
>>>>>
>>>>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>>>>> Acked-by: Brian Woods <brian.woods@xxxxxxx>
>>>>> ---
>>>>> v2: Re-base.
>>>>> ---
>>>>> Besides the mentioned oddity with measured performance, I've also
>>>>> noticed a significant difference (of at least 150 clocks) between
>>>>> measuring immediately around the calls to svm_load_segs() and
>>>>> measuring immediately inside the function.
>>>>>
>>>>
>>>>
>>>>>
>>>>> +#ifdef CONFIG_PV
>>>>> +bool svm_load_segs(unsigned int ldt_ents, unsigned long ldt_base,
>>>>> +                   unsigned int fs_sel, unsigned long fs_base,
>>>>> +                   unsigned int gs_sel, unsigned long gs_base,
>>>>> +                   unsigned long gs_shadow)
>>>>> +{
>>>>> +    unsigned int cpu = smp_processor_id();
>>>>> +    struct vmcb_struct *vmcb = per_cpu(host_vmcb_va, cpu);
>>>>> +
>>>>> +    if ( unlikely(!vmcb) )
>>>>> +        return false;
>>>>> +
>>>>> +    if ( !ldt_base )
>>>>> +    {
>>>>> +        asm volatile ( "prefetch %0" :: "m" (vmcb->ldtr) );
>>>>> +        return true;
>>>>
>>>>
>>>> Could you explain why this is true? We haven't loaded FS/GS here.
>>>
>>> A zero ldt_base argument indicates a prefetch request. This is an
>>> agreement between callers of the function and its implementation.
>>
>>
>> Oh, so this is what svm_load_segs(0, 0, 0, 0, 0, 0, 0) is for?
>>
>> If yes then IMO a separate call would make things a bit clearer,
>> especially since the return value is ignored.
>
> Well, to me having a single central place where everything gets done
> seemed better. And it looks as if Brian agreed, considering I already
> have his ack for the patch. Let me know if you feel strongly.

I would at least like to have a comment explaining the calling
convention, along the lines of the sketch just below.
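A minimal, purely illustrative sketch (the wrapper name is made up
here, not something the patch introduces):

/*
 * Hypothetical helper: per the convention described above, passing
 * ldt_base == 0 turns svm_load_segs() into a pure VMCB prefetch
 * request; all other arguments are ignored and no segment state is
 * actually loaded.
 */
static inline void svm_load_segs_prefetch(void)
{
    /* All-zero arguments == prefetch-only invocation. */
    (void)svm_load_segs(0, 0, 0, 0, 0, 0, 0);
}

Something like this would keep the single central implementation while
making the prefetch call sites self-documenting, and it also hides the
(ignored) return value.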
>
>>>> I also couldn't find discussion about prefetch --- why is
>>>> prefetching ldtr expected to help?
>>>
>>> See the patch description. ldtr as the element is a pretty random
>>> choice between the various fields VMLOAD touches. It's (presumably)
>>> more the page walk than the actual cache line(s) that we want to
>>> be pulled in ahead of time. I can only guess that VMLOAD execution
>>> is more "synchronous" wrt its memory accesses and/or latency to
>>> completion than other (simpler) instructions.
>>
>> I think a code comment would be very helpful (including the fact
>> that ldtr is an arbitrary field), even if this is mentioned in the
>> commit message.
>
> I would likely have added a comment if I could firmly state what's
> going on. But this is derived from experiments only - I'd require
> AMD to fill in the holes before I could write a (useful) comment.

Well, since we have actual code we should be able to explain why we
have it ;-). Even if this is speculation on your part (see the sketch
at the end of this mail for the sort of thing I mean). Otherwise
someone looking at this will (likely?) have no idea about what's going
on, and doing git blame doesn't always work.

-boris
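P.S. To be concrete, here is the sort of comment I have in mind; the
wording is speculative, pieced together from your explanation above
rather than from anything AMD has confirmed:

    if ( !ldt_base )
    {
        /*
         * ldt_base == 0 is a prefetch request (an agreement between
         * this function and its callers).  vmcb->ldtr is an arbitrary
         * choice among the fields VMLOAD touches; the intent is
         * (presumably) to pull in the page walk and cache line(s) for
         * the VMCB ahead of the actual VMLOAD.  This is derived from
         * experiments only (over 100 clocks saved per context switch
         * on a Fam15 box); the exact mechanism is not architecturally
         * documented.
         */
        asm volatile ( "prefetch %0" :: "m" (vmcb->ldtr) );
        return true;
    }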