
Re: [Xen-devel] [PATCH v9 1/6] x86: detect and initialize Cache QoS Monitoring feature

>>> On 18.03.14 at 03:02, "Xu, Dongxiao" <dongxiao.xu@xxxxxxxxx> wrote:
>> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>> >>> On 03.03.14 at 14:21, "Xu, Dongxiao" <dongxiao.xu@xxxxxxxxx> wrote:
>> >> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>> >> >>> On 19.02.14 at 07:32, Dongxiao Xu <dongxiao.xu@xxxxxxxxx> wrote:
>> >> > +    /* Allocate CQM buffer size in initialization stage */
>> >> > +    cqm_pages = ((cqm->max_rmid + 1) * sizeof(domid_t) +
>> >> > +                (cqm->max_rmid + 1) * sizeof(uint64_t) * NR_CPUS)/
>> >>
>> >> Does this really need to be NR_CPUS (rather than nr_cpu_ids)?
>> >
>> > Okay.
>> > As you mentioned in a later comment, the CQM data is indexed per socket.
>> > Here we use NR_CPUS or nr_cpu_ids because either is big enough to cover
>> > the possible number of sockets in the system (even considering the
>> > hotplug case).
>> > Is there a better way to get the system socket count (including
>> > sockets that currently have no CPU in them)?
>> I think we should at least get the estimation as close as possible:
>> Count the sockets that we know of (i.e. that have at least one
>> core/thread) and add the number of "disabled" (hot-pluggable)
>> CPUs if ACPI doesn't surface enough information to associate
>> them with a socket (but I think MADT provides all the needed data).
> It seems that MADT table doesn't contain the socket number information...
> Considering that it is difficult to get an accurate socket count at system 
> initialization time, what if we allocate/free the CQM related memory at 
> runtime, when the admin actually issues a QoS query command?
> With this approach, the data shared between Xen and the Dom0 tools should 
> be much smaller, since we know:
>  - How many processor sockets are active in the system.
>  - How many RMIDs are actively in use in the system.
> With the above, we no longer need to use the full max_rmid * max_socket 
> buffer to share a lot of unnecessary "zero" data, and the newly shared 
> data should be less than one page.
> What's your opinion on that?

If you can get this to work, that would seem like a pretty optimal
solution.

