Re: [Xen-devel] xl: insufficient ballooning when starting certain guests
Sorry for the late reply. This fell through the cracks.

On Tue, Sep 22, 2015 at 03:41:47AM -0600, Jan Beulich wrote:
> Tools maintainers,
>
> it looks as if changes in the memory requirements of the hypervisor
> have pushed things over the border of not working anymore when
> passing through a device and not sharing page tables (between VT-d
> and EPT, or unconditionally on AMD). Since this has always been a

I think this is referring to a system-wide option, which presumably can be
tuned via the "sharept" command line option?

> latent issue (as it's quite obvious that two sets of page tables require
> more memory than a single set) it should imo be fixed. The main issue
> (afaics) here is that the information about whether sharing is in use
> is not currently available to the tools (ignoring the awkward option of
> parsing the hypervisor log). Therefore the question is - should we
> extend XEN_SYSCTL_physinfo accordingly, or are there other
> suitable means to communicate such information which I'm not
> (immediately) aware of?
>

I'm not aware of any other channel at the moment, but I don't think
physinfo is the right place.

Furthermore, with the understanding that you expect the toolstack to set
the limit properly (with SHADOW_OP_SET_ALLOCATION), passing back one bit
of information doesn't help the toolstack determine how much extra memory
it needs. We might as well blindly increase the slack value in libxl
(which is a bad idea IMHO).

Is there a way to pre-determine how much extra memory is needed for the
non-shared case? If so, can it be done in the toolstack alone?

What does the hypervisor do when PTs are shared but it still hits the
boundary? I presume the current default value never triggers such a
situation?

Wei.

> Thanks,
> Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
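[Editor's illustration: the sizing question above is the crux, so here is a
minimal C sketch of how a toolstack-side estimate might scale the paging
allocation when the IOMMU does not share the EPT page tables. The function
name, the iommu_shares_pt flag, and the simple doubling of the per-MiB term
are assumptions for illustration only; this is not an existing libxl or
libxc interface, nor the rule the hypervisor actually applies.]

    /* Sketch only: estimate a paging-pool size (in KiB) for an HVM guest
     * with a passed-through device.  The ~1 MiB per vCPU and ~1 page per
     * MiB of RAM figures follow the usual shadow-memory rule of thumb;
     * the extra factor of two for a separate set of IOMMU page tables is
     * an assumption, not a value taken from Xen. */
    #include <stdbool.h>

    static unsigned long paging_pool_kb(unsigned long maxmem_kb,
                                        unsigned int vcpus,
                                        bool iommu_shares_pt)
    {
        /* Roughly 1 MiB of paging memory per vCPU. */
        unsigned long per_vcpu_kb = 1024UL * vcpus;

        /* Roughly one 4 KiB page of P2M/EPT memory per MiB of guest RAM. */
        unsigned long per_mb_kb = 4UL * (maxmem_kb / 1024);

        /* A second, non-shared set of page tables for the IOMMU needs
         * about the same amount of memory again (assumed). */
        if (!iommu_shares_pt)
            per_mb_kb *= 2;

        return per_vcpu_kb + per_mb_kb;
    }

Even with such an estimate, the toolstack would still need the hypervisor
to report whether sharing is in effect, which is exactly the missing bit
of information discussed above.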