Re: [Xen-devel] [PATCH v3] xentop: add support for qdisks
On Tue, 2015-03-24 at 10:24 -0600, Charles Arnold wrote:
> >>> On 3/24/2015 at 09:01 AM, Ian Campbell <ian.campbell@xxxxxxxxxx> wrote:
> > On Mon, 2015-03-23 at 20:55 -0600, Charles Arnold wrote:
> >
> >> +/* Get up to 1024 active domains */
> >
> > What I meant last time was, is this limitation a concern? What if
> > there are 1025 domains?
>
> This is actually a limit I picked up from libxl. See libxl.c
> libxl_list_domain(), which is consumed in several places. I guess I
> assumed this was some official limit. This version of the patch would
> simply ignore any VMs beyond 1024.

Hrm, that libxl limit is also somewhat unfortunate :-(. In theory
xc_domain_getinfolist can be used a bit more flexibly to get slices of
domains, although getting a consistent snapshot might be tricky. One to
fix for sure though.

> > If this is a concern then perhaps refactor such that the qdisk stats
> > gathering can happen from the inner loop of xenstat_get_node, i.e.
> > near the call to domain_get_tmem_stats; then you would be given the
> > single domain of interest. Might simplify some other stuff too.
>
> This would certainly eliminate the 1024 limit. This may become
> important for running xentop in batch mode, where the output can be
> captured. For normal screen viewing, I doubt anyone has a screen with
> more than 1024 lines on which to view the output :)

:-). I'm not sure if there is anyone out there who uses libxenstat
directly for other purposes, but I suppose it isn't impossible.

> I'll code up another version with your suggestion.

Thanks. Given the libxl limit, I'm wondering about just taking v3 of
this patch and taking what would otherwise have been v4 as an
improvement.

Ian, Wei, any thoughts?

Ian.
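
For reference, the slicing approach Ian mentions might look something
like the sketch below. It is untested: it assumes an already-open
xc_interface handle and the xc_domain_getinfolist signature from libxc
of this period, and CHUNK is an arbitrary batch size, not anything from
the patch.

#include <xenctrl.h>

#define CHUNK 256 /* arbitrary batch size */

/* Walk every domain in CHUNK-sized slices rather than with a single
 * fixed 1024-entry call.  Returns 0 on success, -1 on error. */
static int for_each_domain(xc_interface *xch,
                           void (*fn)(const xc_domaininfo_t *info))
{
    xc_domaininfo_t info[CHUNK];
    uint32_t first = 0;
    int i, n;

    /* Each call fills in up to CHUNK domains with domid >= first. */
    while ((n = xc_domain_getinfolist(xch, first, CHUNK, info)) > 0) {
        for (i = 0; i < n; i++)
            fn(&info[i]);
        /* Resume just past the highest domid returned so far. */
        first = info[n - 1].domain + 1;
    }
    return n < 0 ? -1 : 0;
}

This removes the hard cap, but as noted above a domain created or
destroyed between slices can still be missed or counted twice, which is
exactly the consistent-snapshot problem.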
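
The refactor being suggested would hang a per-domain hook next to
domain_get_tmem_stats. A rough, hypothetical outline follows; only
xenstat_get_node and domain_get_tmem_stats are names from libxenstat,
everything else is a stand-in and none of this is the actual v4 patch.

/* Stand-in types; the real ones live in libxenstat's internals. */
typedef struct xenstat_handle xenstat_handle;
typedef struct xenstat_domain { unsigned int id; /* ... */ } xenstat_domain;

/* Hypothetical per-domain hook: because it is handed exactly one
 * domain, no fixed-size array of all domains is needed and the
 * 1024 cap disappears. */
static void domain_get_qdisk_stats(xenstat_handle *handle,
                                   xenstat_domain *domain)
{
    (void)handle;
    /* ... query the qdisk backend for domain->id only ... */
    (void)domain;
}

/* Body of xenstat_get_node()'s per-domain loop (paraphrased): */
static void collect_one_domain(xenstat_handle *handle,
                               xenstat_domain *domain)
{
    /* ... existing per-domain collection, e.g.: */
    /* domain_get_tmem_stats(handle, domain); */
    domain_get_qdisk_stats(handle, domain);  /* proposed addition */
}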