Re: [Xen-devel] Core parking feature enable
>>> On 02.03.12 at 10:42, Haitao Shan <maillists.shan@xxxxxxxxx> wrote:
> I would really doubt the need to create a new interface of receiving
> ACPI event and sending to user land (other than existing native
> kernel) specifically for Xen. What's the benefit and why kernel people
> should buy-in that?

Receiving ACPI events??? I don't recall having seen anything like that
in the proposed patch (since if that was the case, I would probably
agree that this is better done in kernel/hypervisor). Was what got sent
out perhaps just a small fraction of what is intended, and was it
forgotten to mention this?

Jan

> Core parking is a platform feature, not virtualization feature.
> Naturally following native approach is the most efficient. Why do you
> want to create yet another interface for Xen to do that?
>
> Shan Haitao
>
> 2012/3/1 Jan Beulich <JBeulich@xxxxxxxx>:
>>>>> On 01.03.12 at 15:31, "Liu, Jinsong" <jinsong.liu@xxxxxxxxx> wrote:
>>> Jan Beulich wrote:
>>>>>>> On 01.03.12 at 12:14, "Liu, Jinsong" <jinsong.liu@xxxxxxxxx> wrote:
>>>>> Unfortunately, yes, though cumbersome is not basic reason user space
>>>>> approach is not preferred. Core parking is a power management staff,
>>>>> based on dynamic physical details like cpu topologies and maps owned
>>>>> by hypervisor. It's natural to implement
>>>>
>>>> CPU topology is available to user space, and as far as I recall your
>>>> hypervisor patch didn't really manipulate any maps - all it did was
>>>> pick what CPU to bring up/down, and then carry out that decision.
>>>
>>> No. threads_per_core and cores_per_socket exposed to userspace is pointless
>>> to us (and, it's questionable need fixup).
>>
>> Sure this would be insufficient. But what do you think did
>> XEN_SYSCTL_topologyinfo get added for?
>>
>>> Core parking depends on following physical info (no matter where it
>>> implement):
>>> 1. cpu_online_map;
>>> 2. cpu_present_map;
>>> 3. cpu_core_mask;
>>> 4. cpu_sibling_mask;
>>> all of them are *dynamic*, especially, 3/4 are varied per cpu and per
>>> online/offline ops.
>>
>> Afaict all of these can be reconstructed using (mostly sysctl)
>> hypercalls.
>>
>> Jan
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@xxxxxxxxxxxxx
>> http://lists.xen.org/xen-devel
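[Editorial note: Jan's claim above — that the online/present maps and the per-CPU core/sibling masks can be reconstructed from (mostly sysctl) hypercall data — can be illustrated with a minimal sketch. The topology table below is hypothetical sample data standing in for the per-CPU core/socket ids that a call like XEN_SYSCTL_topologyinfo reports; the helper names are illustrative, not Xen APIs.]

```python
# Sketch: rebuilding the four maps Jinsong lists from per-CPU
# (core_id, socket_id) data of the kind the topology sysctl reports.
# The sample table is hypothetical.

# cpu -> (core_id, socket_id); None marks a present-but-offline CPU,
# which reports no valid topology ids.
topology = {
    0: (0, 0),
    1: (0, 0),      # sibling hyperthread of CPU 0
    2: (1, 0),      # second core, same socket
    3: None,        # present, currently offline
}

cpu_present_map = set(topology)
cpu_online_map = {c for c, t in topology.items() if t is not None}

def sibling_mask(cpu):
    """Online CPUs sharing this CPU's core (same core and socket id)."""
    return {c for c in cpu_online_map if topology[c] == topology[cpu]}

def core_mask(cpu):
    """Online CPUs sharing this CPU's socket."""
    sock = topology[cpu][1]
    return {c for c in cpu_online_map if topology[c][1] == sock}

print(sorted(sibling_mask(0)))  # [0, 1]
print(sorted(core_mask(0)))     # [0, 1, 2]
```

Because the masks are derived on demand from a fresh topology query, a CPU online/offline event is picked up simply by re-reading the table — which is the dynamic behaviour Jinsong is concerned about.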