Re: [Xen-devel] [PATCH v13 12/23] x86: refactor psr: L3 CAT: set value: implement write msr flow.
>>> Yi Sun <yi.y.sun@xxxxxxxxxxxxxxx> 07/13/17 9:34 AM >>>
>On 17-07-12 23:20:24, Jan Beulich wrote:
>> >>> Yi Sun <yi.y.sun@xxxxxxxxxxxxxxx> 07/13/17 5:00 AM >>>
>> >On 17-07-12 13:37:02, Jan Beulich wrote:
>> >> >>> Yi Sun <yi.y.sun@xxxxxxxxxxxxxxx> 07/06/17 4:07 AM >>>
>> >> >+    if ( socket == cpu_to_socket(smp_processor_id()) )
>> >> >+        do_write_psr_msrs(&data);
>> >> >+    else
>> >> >+    {
>> >> >+        unsigned int cpu = get_socket_cpu(socket);
>> >> >+
>> >> >+        if ( cpu >= nr_cpu_ids )
>> >> >+            return -ENOTSOCK;
>> >> >+        on_selected_cpus(cpumask_of(cpu), do_write_psr_msrs, &data, 1);
>> >>
>> >> How frequent an operation can this be? Considering that the actual MSR
>> >> write(s) in the handler is (are) conditional I wonder whether it wouldn't
>> >> be worthwhile trying to avoid the IPI altogether, by pre-checking whether
>> >> any write actually needs doing.
>> >>
>> >Yes, I think I can check if the value to set is same as
>> >'feat->cos_reg_val[cos]' before calling IPI.
>>
>> Well, as said - whether it's worth the extra effort depends on whether
>> there is a (reasonable) scenario where this function may be executed
>> frequently.
>>
>This function is executed when 'psr-cat-set' command is executed. I consult
>the libvirt guy, this command may be executed frequently under some
>scenarios. E.g. user may dynamically adjust the cache allocation for VMs
>according to CMT result.

Hmm, that's not something I would call frequent - in the whole invocation of
the user mode process the IPI will be lost in the noise. "Frequent" would be
something the kernel does without direct user mode triggering, like on the
context switch path, in code running from a timer, or some such.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
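For readers following the pre-check idea discussed above, the following is a
minimal, self-contained C sketch of the pattern, not the actual Xen code: the
struct names, fields and helpers (feat_node, cos_write_data, write_needed,
MAX_COS_REG_CNT) are hypothetical stand-ins; only the name cos_reg_val is
taken from the thread. The point is simply to compare the requested COS
values against the cached copy on the requesting CPU and only issue the
cross-socket update (the IPI in the real code) when something actually
changes.

#include <stdbool.h>
#include <stdio.h>

#define MAX_COS_REG_CNT 4

/* Hypothetical stand-in for the per-socket feature state: a cached copy
 * of what is currently programmed in the COS MSRs (the role played by
 * feat->cos_reg_val[] in the patch under discussion). */
struct feat_node {
    unsigned int cos_reg_val[MAX_COS_REG_CNT];
};

/* Hypothetical stand-in for the data handed to do_write_psr_msrs(). */
struct cos_write_data {
    unsigned int cos;                   /* first class of service to update */
    unsigned int val[MAX_COS_REG_CNT];  /* values requested by the user */
    unsigned int nr;                    /* number of values to write */
};

/* Return true if any requested value differs from the cached MSR value,
 * i.e. an MSR write (and hence the IPI to the remote socket) is needed. */
static bool write_needed(const struct feat_node *feat,
                         const struct cos_write_data *data)
{
    for ( unsigned int i = 0; i < data->nr; i++ )
        if ( feat->cos_reg_val[data->cos + i] != data->val[i] )
            return true;

    return false;
}

int main(void)
{
    struct feat_node feat = { .cos_reg_val = { 0xff, 0xff, 0xff, 0xff } };
    struct cos_write_data data = { .cos = 1, .val = { 0xff }, .nr = 1 };

    /* Value already programmed: the caller can return early, no IPI. */
    printf("IPI needed: %s\n", write_needed(&feat, &data) ? "yes" : "no");

    data.val[0] = 0x3f;    /* now a real change is requested */
    printf("IPI needed: %s\n", write_needed(&feat, &data) ? "yes" : "no");

    return 0;
}

Doing this comparison on the requesting CPU lets the caller skip scheduling
do_write_psr_msrs() on the remote socket altogether when the requested
allocation is already in place, which is the saving Jan is asking about.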