Re: [Xen-devel] Xen Platform QoS design discussion
>>> On 19.05.14 at 13:28, <George.Dunlap@xxxxxxxxxxxxx> wrote:
> But in reality, all we need the daemon for is a place to store the
> information to query. The idea we came up with was to allocate memory
> *inside the hypervisor* to store the information. The idea is that
> we'd have a sysctl to prompt Xen to *collect* the data into some
> memory buffers inside of Xen, and then a domctl that would allow you
> to query the data on a per-domain basis.
>
> That should be a good balance -- it's not quite as good as having a
> separate daemon, but it's a pretty good compromise.

Which all leaves aside the suggested alternative of making available a
couple of simple operations allowing an eventual daemon to do the MSR
accesses without the hypervisor being concerned about where to store
the data and how to make it accessible to the consumer.

> There are a couple of options regarding collecting the data. One is
> to simply require the caller to do a "poll" sysctl every time they
> want to refresh the data. Another possibility would be to have a
> sysctl "freshness" knob: you could say, "Please make sure the data is
> no more than 1000ms old"; Xen could then automatically do a refresh
> when necessary.
>
> The advantage of the "poll" method is that you could get a consistent
> snapshot across all domains; but you'd have to add in code to do the
> refresh. (An xl command querying an individual domain would
> undoubtedly end up calling the poll on each execution, for instance.)
>
> An advantage of the "freshness" knob, on the other hand, is that you
> automatically get coalescing without having to do anything special
> with the interface.

With the clear disadvantage of potentially doing work the results of
which are never going to be looked at by anyone.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel