
Re: [Xen-devel] [Memory Accounting] was: Re: PVH dom0 creation fails - the system freezes



On Thu, Jul 26, 2018 at 12:07 AM, Boris Ostrovsky
<boris.ostrovsky@xxxxxxxxxx> wrote:
> On 07/25/2018 02:56 PM, Andrew Cooper wrote:
>> On 25/07/18 17:29, Juergen Gross wrote:
>>> On 25/07/18 18:12, Roger Pau Monné wrote:
>>>> On Wed, Jul 25, 2018 at 05:05:35PM +0300, bercarug@xxxxxxxxxx wrote:
>>>>> On 07/25/2018 05:02 PM, Wei Liu wrote:
>>>>>> On Wed, Jul 25, 2018 at 03:41:11PM +0200, Juergen Gross wrote:
>>>>>>> On 25/07/18 15:35, Roger Pau Monné wrote:
>>>>>>>>> What could be causing the available memory loss problem?
>>>>>>>> That seems to be Linux aggressively ballooning out memory; you go from
>>>>>>>> 7129M of total memory to 246M. Are you creating a lot of domains?
>>>>>>> This might be related to the tools thinking dom0 is a PV domain.
>>>>>> Good point.
>>>>>>
>>>>>> In that case, xenstore-ls -fp would also be useful. The output should
>>>>>> show the balloon target for Dom0.
>>>>>>
>>>>>> You can also try to set the autoballoon to off in /etc/xen/xl.cfg to see
>>>>>> if it makes any difference.
>>>>>>
>>>>>> Wei.
>>>>> Also tried setting autoballooning off, but it had no effect.
>>>> This is a Linux/libxl issue, and I'm not sure what the best way to
>>>> solve it is. Linux has the following 'workaround' in the balloon driver:
>>>>
>>>> err = xenbus_scanf(XBT_NIL, "memory", "static-max", "%llu",
>>>>                &static_max);
>>>> if (err != 1)
>>>>     static_max = new_target;
>>>> else
>>>>     static_max >>= PAGE_SHIFT - 10;
>>>> target_diff = xen_pv_domain() ? 0
>>>>             : static_max - balloon_stats.target_pages;
>>> Hmm, shouldn't PVH behave the same way as PV here? I don't think
>>> there is any memory missing for PVH, as opposed to HVM's firmware memory.
>>>
>>> Adding Boris for a second opinion.
>
> (Notwithstanding Andrew's rant below ;-))
>
> I am trying to remember -- what memory were we trying not to online for
> HVM here?

My general memory of the situation is this:

* Balloon drivers are told to reach a "target" value for max_pages.
* max_pages includes all memory assigned to the guest, including video
RAM, "special" pages, iPXE ROMs, BIOS ROMs from passed-through
devices, and so on.
* Unfortunately, the balloon driver doesn't know what its own max_pages
value is, and can't read it.
* So what the balloon drivers do at the moment (as I understand it) is
look at the memory *reported as RAM*, and do a calculation:
  visible_ram - target_max_pages = pages_in_balloon
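
In C terms, what the driver effectively ends up doing today is
something like the sketch below -- the names are mine for
illustration, not what the driver actually calls things:

    /* visible_ram: pages the guest's memory map reports as RAM; this
     * already excludes the non-RAM pages listed above.
     * target_max_pages: the target, which was meant to cover those
     * non-RAM pages as well. */
    static unsigned long balloon_pages(unsigned long visible_ram,
                                       unsigned long target_max_pages)
    {
        /* The guest hands this many pages back to Xen, so it keeps
         * target_max_pages of RAM *plus* all the non-RAM pages. */
        return visible_ram - target_max_pages;
    }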

You can probably see why this won't work -- the result is that the
guest balloons down to (target_max_pages + non_ram_pages).  This is
kind of messy for normal guests, but when you have a
populate-on-demand guest, it leaves non_ram_pages' worth of PoD RAM
in the guest.  The hypervisor then does a huge amount of work
swapping the PoD pages around under the guest's feet, until it can't
find any more zeroed guest pages to use, at which point it crashes
the guest.
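
To put some (made-up) numbers on it: say the domain has 1024MiB
assigned in total, of which 16MiB is video RAM, special pages and
friends, so the guest sees 1008MiB of RAM.  If the target is 512MiB,
the driver computes a 1008 - 512 = 496MiB balloon, and the guest ends
up holding 512MiB of RAM plus the 16MiB of non-RAM pages -- 528MiB in
total, 16MiB over target.  For a PoD guest, that's 16MiB of PoD
entries the hypervisor has nothing left to back.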

The kludge we have right now is to make up a number for HVM guests
which is slightly larger than non_ram_pages, and tell the guest to aim
for *that* instead.
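
(Made-up numbers again: if the real non-RAM overhead works out to
~16MiB, the made-up figure might be 32MiB; as long as it covers the
real overhead, the guest ends up at or just below what it has
actually been allocated, rather than non_ram_pages over.)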

I think what we need is for the *toolstack*, rather than the guest, to
calculate the size of the balloon, and then tell the balloon driver
how big to make it, instead of the driver trying to figure that out
on its own.
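
Something like the sketch below is what I have in mind on the
toolstack side.  The "memory/balloon-size" node and all of the
helper/parameter names are invented purely for illustration -- only
the libxenstore call at the end is an existing function:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>
    #include <inttypes.h>
    #include <xenstore.h>

    /* assigned_pages:   everything currently given to the domain
     * non_ram_pages:    video RAM, special pages, ROMs, and so on
     * target_ram_pages: the RAM we actually want the guest to keep */
    static bool set_balloon_size(struct xs_handle *xsh, int domid,
                                 uint64_t assigned_pages,
                                 uint64_t non_ram_pages,
                                 uint64_t target_ram_pages)
    {
        /* The toolstack, not the guest, works out the balloon size. */
        uint64_t balloon_pages =
            (assigned_pages - non_ram_pages) - target_ram_pages;
        char path[64], val[24];

        /* "memory/balloon-size" is a node name invented for this
         * example; it is not an existing interface. */
        snprintf(path, sizeof(path),
                 "/local/domain/%d/memory/balloon-size", domid);
        snprintf(val, sizeof(val), "%" PRIu64, balloon_pages);
        return xs_write(xsh, XBT_NULL, path, val, strlen(val));
    }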

We also need to get a handle on the allocation and tracking of all
the random "non-RAM" pages given to a guest; but that's a slightly
different region of the swamp.

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

