Re: [Xen-devel] [PATCH v8 1/5] x86: add simple udelay calibration
Hi, Lu

At 07/13/2017 11:00 AM, Lu Baolu wrote:
> Hi,
>
> On 07/13/2017 09:39 AM, Dou Liyang wrote:
>> Hi, Lu
>>
>> At 07/13/2017 09:17 AM, Lu Baolu wrote:
>>> Hi,
>>>
>>> On 07/12/2017 04:02 PM, Dou Liyang wrote:
>>>> Hi, Lu
>>>>
>>>> At 05/05/2017 08:50 PM, Boris Ostrovsky wrote:
>>>>> On 05/05/2017 01:41 AM, Lu Baolu wrote:
>>>>>> Hi,
>>>>>>
>>>>>> On 05/03/2017 06:38 AM, Boris Ostrovsky wrote:
>>>>>>> On 03/21/2017 04:01 AM, Lu Baolu wrote:
>>>>>>>> Add a simple udelay calibration to the x86 architecture-specific
>>>>>>>> boot-time initializations. This gets a workable estimate for
>>>>>>>> loops_per_jiffy, so udelay() can be used after this initialization.
>>>>>>>
>>>>>>> This breaks Xen PV guests since at this point, and until
>>>>>>> x86_init.paging.pagetable_init(), which is when
>>>>>>> pvclock_vcpu_time_info is mapped, they cannot access pvclock.
>>>>>>>
>>>>>>> Is it reasonable to do this before tsc_init() is called? (The
>>>>>>> failure has nothing to do with tsc_init(), really --- it's just
>>>>>>> that it is called late enough that Xen PV guests get properly
>>>>>>> initialized.) If it is, would it be possible to move
>>>>>>> simple_udelay_calibration() after x86_init.paging.pagetable_init()?
>>>>>>
>>>>>> This is currently only used for bare metal. How about bypassing it
>>>>>> for Xen PV guests?
>>>>>
>>>>> This is fixed for Xen PV guests now (in the sense that we don't
>>>>> crash anymore), but my question is still whether this is not too
>>>>> early.
>>>>>
>>>>> Besides tsc_init() (which might not be important here), at the time
>>>>> when simple_udelay_calibration() is invoked we haven't yet called:
>>>>> * kvmclock_init(), which sets calibration routines for KVM
>>>>> * init_hypervisor_platform(), which sets calibration routines for
>>>>>   VMware and Xen HVM
>>>>> * x86_init.paging.pagetable_init(), which sets calibration routines
>>>>>   for Xen PV
>>>>
>>>> I guess these may have been missed. Do you have any comments about
>>>> these?
>>>
>>> The patch will be available in 4.13-rc1.
>>
>> Yes, I have seen it in the upstream.
>>
>> Firstly, I also met this problem and wanted to call udelay() earlier
>> than the *loops_per_jiffy* setup, like you[1]. So I am very interested
>> in this patch. ;)
>>
>> I am also confused about the questions which Boris asked: do we do the
>> CPU and TSC calibration too early just for using udelay()? This design
>> broke our interface of x86_platform.calibrate_cpu/tsc.
>>
>> And I also have a question below.
>>
>> [...]
>>
>>>>>>>> +static void __init simple_udelay_calibration(void)
>>>>>>>> +{
>>>>>>>> +	unsigned int tsc_khz, cpu_khz;
>>>>>>>> +	unsigned long lpj;
>>>>>>>> +
>>>>>>>> +	if (!boot_cpu_has(X86_FEATURE_TSC))
>>>>>>>> +		return;
>>
>> If we don't have the TSC feature in the booting CPU and it returns
>> here, can we use udelay() correctly like before?
>
> If we have the TSC feature, we calculate a more precise loops_per_jiffy
> here. Otherwise, we just keep it as before. This function doesn't break
> the use of udelay().

Oh, I see.

> In XDbC (xHCI debug capability), we just want udelay() to work more
> precisely on TSC-supported systems.

That is different from my problem; I had missed it. Thanks for your kind
explanation. :)

Thanks,
dou

> Best regards,
> Lu Baolu

>> [1] https://lkml.org/lkml/2017/7/3/276
>>
>> Thanks,
>> dou.

>>>> Thanks,
>>>> dou.

>>>>>>>> +
>>>>>>>> +	cpu_khz = x86_platform.calibrate_cpu();
>>>>>>>> +	tsc_khz = x86_platform.calibrate_tsc();
>>>>>>>> +
>>>>>>>> +	tsc_khz = tsc_khz ? : cpu_khz;
>>>>>>>> +	if (!tsc_khz)
>>>>>>>> +		return;
>>>>>>>> +
>>>>>>>> +	lpj = tsc_khz * 1000;
>>>>>>>> +	do_div(lpj, HZ);
>>>>>>>> +	loops_per_jiffy = lpj;
>>>>>>>> +}
>>>>>>>> +
>>>>>>>>  /*
>>>>>>>>   * Determine if we were loaded by an EFI loader.  If so, then we have also been
>>>>>>>>   * passed the efi memmap, systab, etc., so we should use these data structures
>>>>>>>> @@ -985,6 +1005,8 @@ void __init setup_arch(char **cmdline_p)
>>>>>>>>  	 */
>>>>>>>>  	x86_configure_nx();
>>>>>>>>
>>>>>>>> +	simple_udelay_calibration();
>>>>>>>> +
>>>>>>>>  	parse_early_param();
>>>>>>>>
>>>>>>>>  #ifdef CONFIG_MEMORY_HOTPLUG
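For anyone who wants to see the loops_per_jiffy arithmetic from the quoted
patch in isolation, here is a minimal stand-alone sketch that performs the
same calculation in userspace (do_div() in the kernel is simply an in-place
64-bit division helper). The TSC frequency and HZ values below are assumed
example numbers for illustration, not figures taken from this thread.

#include <stdio.h>

int main(void)
{
	unsigned int tsc_khz = 2400000;		/* assumed: 2.4 GHz TSC */
	unsigned int hz = 250;			/* assumed: CONFIG_HZ=250 */
	unsigned long lpj;

	/* Same arithmetic as the patch: lpj = tsc_khz * 1000 / HZ. */
	lpj = (unsigned long)tsc_khz * 1000 / hz;

	/* 2,400,000 kHz * 1000 / 250 = 9,600,000 loops per jiffy. */
	printf("loops_per_jiffy = %lu\n", lpj);

	return 0;
}

In other words, with a TSC-based delay loop, loops_per_jiffy ends up being
the number of TSC cycles per timer tick, which is why the patch only needs a
frequency estimate rather than a timed spin loop.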
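Boris's ordering concern is about the calibration hooks being function
pointers that hypervisor-specific setup code replaces later in boot, so an
early caller still gets the native routine. The stand-alone sketch below
models only that pattern; it is not the kernel's code, and the struct,
function names, and kHz values in it are made up for illustration.

#include <stdio.h>

/* Toy model of the x86_platform ops table; not the kernel's definition. */
struct platform_ops {
	unsigned long (*calibrate_tsc)(void);	/* returns kHz */
};

static unsigned long native_calibrate(void)
{
	return 2400000;		/* pretend result of native calibration */
}

static unsigned long hypervisor_calibrate(void)
{
	return 2496000;		/* pretend value supplied by a hypervisor */
}

static struct platform_ops platform = {
	.calibrate_tsc = native_calibrate,	/* default until overridden */
};

int main(void)
{
	/* Early caller (like simple_udelay_calibration()) gets the default. */
	printf("early calibration: %lu kHz\n", platform.calibrate_tsc());

	/*
	 * Later init (kvmclock_init(), init_hypervisor_platform(), or
	 * x86_init.paging.pagetable_init() in the discussion above) swaps
	 * in a hypervisor-aware routine; only callers that run after this
	 * point see it.
	 */
	platform.calibrate_tsc = hypervisor_calibrate;
	printf("late calibration:  %lu kHz\n", platform.calibrate_tsc());

	return 0;
}

On a Xen PV guest the early native path is worse than merely inaccurate,
because pvclock is not yet mapped at that point, which is the breakage Boris
reported.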