
Re: [Xen-devel] Xen Platform QoS design discussion



> -----Original Message-----
> From: Xu, Dongxiao
> Sent: Monday, May 26, 2014 8:52 AM
> To: George Dunlap; Jan Beulich
> Cc: Andrew Cooper; Ian Campbell; xen-devel@xxxxxxxxxxxxx
> Subject: RE: [Xen-devel] Xen Platform QoS design discussion
> 
> > -----Original Message-----
> > From: George Dunlap [mailto:george.dunlap@xxxxxxxxxxxxx]
> > Sent: Thursday, May 22, 2014 5:27 PM
> > To: Jan Beulich; Xu, Dongxiao
> > Cc: Andrew Cooper; Ian Campbell; xen-devel@xxxxxxxxxxxxx
> > Subject: Re: [Xen-devel] Xen Platform QoS design discussion
> >
> > On 05/22/2014 09:39 AM, Jan Beulich wrote:
> > >>>> On 22.05.14 at 10:19, <dongxiao.xu@xxxxxxxxx> wrote:
> > >>> From: xen-devel-bounces@xxxxxxxxxxxxx
> > >>> And without seeing the need for any advanced access mechanism,
> > >>> I'm continuing to try to promote D - implement simple, policy free
> > >>> (platform or sysctl) hypercalls providing MSR access to the tool stack
> > >>> (along the lines of the msr.ko Linux kernel driver).
> > >> Do you mean some hypercall implementation like following:
> > >> In this case, Dom0 toolstack actually queries the real physical CPU MSRs.
> > >>
> > >> struct xen_sysctl_accessmsr {
> > >>      unsigned int cpu;
> > >>      unsigned int msr;
> > >>      unsigned long value;
> > >> };
> > >>
> > >> do_sysctl () {
> > >> ...
> > >> case XEN_SYSCTL_accessmsr:
> > >>      /* store the msr value in accessmsr.value */
> > >>      on_selected_cpus(cpumask_of(cpu), read_msr, &(op->u.accessmsr), 1);
> > >> }
> > > Yes, along those lines, albeit slightly more sophisticated based on
> > > the specific kind of operations needed for e.g. QoS (Andrew had
> > > some comments to the effect that simple read and write operations
> > > alone may not suffice).
> >
> > That sounds nice and clean, and hopefully would be flexible enough to do
> > stuff in the future.
> >
> > But fundamentally that doesn't address Andrew's concerns that if callers
> > are going to make repeated calls into libxl for each domain, this won't
> > scale.
> >
> > On the other hand, there may be an argument for saying, "We'll optimize
> > that if we find it's a problem."
> >
> > Dongxiao, is this functionality implemented for KVM yet?  Do you know
> > how they're doing it?
> 
> No, KVM CQM is not enabled yet. :(

I think Jan's opinion here is similar to what I proposed at the beginning of 
this thread. The only difference is that Jan prefers to get the CQM data 
per-socket and per-domain (copying the data on each call), while I proposed 
getting the CQM data per-domain for all sockets at once, which reduces the 
number of hypercalls.

Stakeholders, please share your preference: should the hypercall be designed 
to return the data per-socket and per-domain, or per-domain for all sockets? 
Do you think it is worth implementing a version of the patch based on this 
idea? I am happy to implement either. :)

Thanks,
Dongxiao

> 
> Thanks,
> Dongxiao
> 
> >
> >   -George

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

