
Re: [Xen-devel] Xen Platform QoS design discussion



> -----Original Message-----
> From: Jan Beulich [mailto:jbeulich@xxxxxxxx]
> Sent: Friday, May 30, 2014 7:18 PM
> To: ian.campbell@xxxxxxxxxx
> Cc: andrew.cooper3@xxxxxxxxxx; george.dunlap@xxxxxxxxxxxxx; Xu, Dongxiao;
> Nakajima, Jun; Auld, Will; xen-devel@xxxxxxxxxxxxx
> Subject: Re: RE: RE: [Xen-devel] Xen Platform QoS design discussion
> 
> >>> Ian Campbell <ian.campbell@xxxxxxxxxx> 05/30/14 11:11 AM >>>
> >On Thu, 2014-05-29 at 10:11 +0100, Jan Beulich wrote:
> >> >>> "Xu, Dongxiao" <dongxiao.xu@xxxxxxxxx> 05/29/14 9:31 AM >>>
> >> >Okay. If I understand correctly, you prefer to implement a pure MSR
> >> >access hypercall for one CPU, and put all other CQM things in the
> >> >libxc/libxl layer.
> >>
> >> >In this case, if libvirt/XenAPI is trying to query a domain's cache
> >> >utilization in the system (say 2 sockets), then it will trigger _two_
> >> >such MSR access hypercalls, for CPUs in the two different sockets.
> >> >If you are okay with this idea, I am going to implement it.
> >>
> >> I am okay with it, but give it a couple of days before you start so that
> >> others can voice their opinions too.
> >
>Dom0 may not have a vcpu which is scheduled/schedulable on every socket.
>The "scheduled" case it can probably deal with by doing awful-sounding
>temporary things to its affinity mask, but if a vcpu is not schedulable
>there at all (e.g. due to cpupools etc.) then that sounds even harder to
>sort out...
> 
> But that's why we're intending to add a helper hypercall in the first place.
> This isn't intended to be a 'read MSR' one, but a 'read MSR on this CPU' one.
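
Below is a rough sketch of the interface I have in mind for that helper; all
names, numbers and the exact layout are placeholders rather than final
public-header material:

#include <stdint.h>

/*
 * Sketch only: as Jan says above, the caller names the physical CPU and
 * Xen performs the RDMSR/WRMSR on that CPU (sending an IPI if necessary),
 * so Dom0 does not need a vcpu schedulable on every socket.
 */
struct xen_msr_access_op {
    uint32_t cpu;    /* IN: physical CPU that performs the access */
    uint32_t cmd;    /* IN: read or write */
#define XEN_MSR_ACCESS_READ  0
#define XEN_MSR_ACCESS_WRITE 1
    uint32_t msr;    /* IN: MSR index, e.g. IA32_QM_EVTSEL or IA32_QM_CTR */
    uint32_t pad;    /* keep the 64-bit field naturally aligned */
    uint64_t val;    /* IN for a write, OUT for a read */
};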

There have been no further comments on this MSR access hypercall design, so I
assume people are mostly okay with it?
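
For the toolstack side, querying a domain's cache occupancy would then look
roughly like the sketch below (two hypercalls on a 2-socket box, one per
socket). xc_readmsr_on_cpu() is a hypothetical libxc wrapper around the
operation above, and programming the domain's RMID into IA32_QM_EVTSEL
beforehand is elided for brevity:

#include <stdint.h>
#include <xenctrl.h>

#define MSR_IA32_QM_CTR 0xc8e   /* CQM occupancy counter (Intel SDM) */

/* Hypothetical libxc wrapper around the per-CPU MSR access hypercall. */
int xc_readmsr_on_cpu(xc_interface *xch, unsigned int cpu,
                      uint32_t msr, uint64_t *val);

/*
 * Sum a domain's L3 occupancy across sockets, using one online CPU per
 * socket (socket_cpu[]).
 */
static int total_l3_occupancy(xc_interface *xch,
                              const unsigned int *socket_cpu,
                              unsigned int nr_sockets, uint64_t *total)
{
    uint64_t sum = 0;
    unsigned int s;

    for ( s = 0; s < nr_sockets; s++ )
    {
        uint64_t ctr;
        int rc = xc_readmsr_on_cpu(xch, socket_cpu[s],
                                   MSR_IA32_QM_CTR, &ctr);

        if ( rc )
            return rc;  /* a failing socket fails the whole query */
        sum += ctr;     /* raw counts; the caller scales them by the
                           conversion factor from CPUID leaf 0xf */
    }

    *total = sum;
    return 0;
}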

Thanks,
Dongxiao

> 
> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

