
Re: [RFC PATCH v5 0/8] Make balloon drivers' memory changes known to the rest of the kernel


  • To: Konstantin Khlebnikov <koct9i@xxxxxxxxx>, "Denis V. Lunev" <den@xxxxxxxxxxxxx>
  • From: Alexander Atanasov <alexander.atanasov@xxxxxxxxxxxxx>
  • Date: Wed, 19 Oct 2022 21:03:50 +0300
  • Cc: kernel@xxxxxxxxxx, kernel test robot <lkp@xxxxxxxxx>, "Michael S . Tsirkin" <mst@xxxxxxxxxx>, David Hildenbrand <david@xxxxxxxxxx>, Wei Liu <wei.liu@xxxxxxxxxx>, Nadav Amit <namit@xxxxxxxxxx>, pv-drivers@xxxxxxxxxx, Jason Wang <jasowang@xxxxxxxxxx>, virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx, "K. Y. Srinivasan" <kys@xxxxxxxxxxxxx>, Haiyang Zhang <haiyangz@xxxxxxxxxxxxx>, Stephen Hemminger <sthemmin@xxxxxxxxxxxxx>, Dexuan Cui <decui@xxxxxxxxxxxxx>, linux-hyperv@xxxxxxxxxxxxxxx, Juergen Gross <jgross@xxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Wed, 19 Oct 2022 18:04:15 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 19.10.22 18:39, Konstantin Khlebnikov wrote:


On Wed, 19 Oct 2022 at 14:06, Denis V. Lunev <den@xxxxxxxxxxxxx> wrote:

    On 10/19/22 12:53, Konstantin Khlebnikov wrote:
     > On Wed, 19 Oct 2022 at 12:57, Alexander Atanasov
     > <alexander.atanasov@xxxxxxxxxxxxx> wrote:
     >
     >     Currently balloon drivers (Virtio, Xen, Hyper-V, VMware, ...)
     >     inflate and deflate the guest memory size, but there is no
     >     way to know by how much they have changed the memory size.
     >
     >     Make it possible for the drivers to report the values to mm core.
     >
     >     Display reported InflatedTotal and InflatedFree in /proc/meminfo
     >     and print these values on OOM and sysrq from show_mem().
     >
     >     The two values correspond to the two modes the drivers work
     >     in: with adjust_managed_page_count or without it.
     >
     >     In earlier versions, there was a notifier for these changes,
     >     but after discussion it is better to implement it in a
     >     separate patch series, since it turned out to be a larger
     >     piece of work than initially expected.
     >
     >     The amount of inflated memory can be used by:
     >      - totalram_pages() users working with drivers that do not
     >        use adjust_managed_page_count
     >      - si_meminfo(..) users to improve their calculations
     >      - userspace software that monitors memory pressure
     >
     >
     > Sorry, I see no reason for that series.
     > Balloon inflation adjusts totalram_pages. That's enough.
     >
    no, they are not, at least under some circumstances, e.g. the
    virtio balloon does not do that when VIRTIO_BALLOON_F_DEFLATE_ON_OOM
    is set


     > There is no reason to know the amount of non-existent ballooned
     > memory inside.
     > Management software which works outside should care about that.
     >
    The problem comes at the moment when we are running
    our Linux server inside a virtual machine and the customer
    comes with crazy questions like "where is our memory?".


Ok. In this case balloon management is partially inside the VM.
I.e. we could report a portion of the balloon as potentially available memory.

I guess memory pressure could deflate the balloon down to some threshold set by the external hypervisor. So, without knowledge of this threshold there is no correct answer about the size of available memory.
Showing just the size of the balloon doesn't give much.

You need both the current value and the adjustment to get the absolute top.
totalram_pages() alone gives you only the current value. To get the absolute maximum you need to know by how much the balloon has adjusted it.
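
To make that concrete, here is a minimal userspace sketch of the calculation (assuming the InflatedTotal field proposed in this series ends up in /proc/meminfo under exactly that name and in kB, which the final patches may of course change):

#include <stdio.h>
#include <string.h>

int main(void)
{
        FILE *f = fopen("/proc/meminfo", "r");
        char line[256];
        unsigned long mem_total = 0, inflated_total = 0;

        if (!f)
                return 1;

        while (fgets(line, sizeof(line), f)) {
                /* MemTotal is the current, already balloon-adjusted value. */
                if (!strncmp(line, "MemTotal:", 9))
                        sscanf(line + 9, "%lu", &mem_total);
                /* InflatedTotal is the field proposed by this series. */
                else if (!strncmp(line, "InflatedTotal:", 14))
                        sscanf(line + 14, "%lu", &inflated_total);
        }
        fclose(f);

        /*
         * For drivers that use adjust_managed_page_count(), MemTotal has
         * already been reduced by the balloon, so adding InflatedTotal
         * back gives the absolute top the guest can reach after a full
         * deflate.
         */
        printf("current:      %lu kB\n", mem_total);
        printf("inflated:     %lu kB\n", inflated_total);
        printf("absolute top: %lu kB\n", mem_total + inflated_total);
        return 0;
}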

The drivers that do not adjust totalram_pages() and leave the inflated memory accounted as used assume that this memory can be reclaimed at any time. But that assumption is not completely true and leaves the system with a false totalram value. Why? VMware does not have an oom_notifier at all (it may have some other mechanism, I do not know), and the virtio balloon reclaims 1MB on OOM _if_ it can.
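
For reference, the split between the two modes boils down to whether the driver calls adjust_managed_page_count() for each inflated page. A simplified sketch, paraphrased from the inflate path of drivers/virtio/virtio_balloon.c (the helper name is mine; locking, page bookkeeping and the host notification are left out):

#include <linux/mm.h>
#include <linux/virtio_config.h>
#include <linux/virtio_balloon.h>

/* Hypothetical helper, not the actual driver function. */
static void balloon_account_inflated_page(struct virtio_device *vdev,
                                          struct page *page)
{
        if (!virtio_has_feature(vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM)) {
                /*
                 * Mode 1: subtract the page from totalram_pages().
                 * MemTotal shrinks and the rest of the kernel sees a
                 * smaller machine.
                 */
                adjust_managed_page_count(page, -1);
        }
        /*
         * Mode 2 (DEFLATE_ON_OOM negotiated): nothing is subtracted, the
         * inflated page simply stays accounted as "used", on the
         * assumption that it can be handed back under OOM pressure -
         * which, as noted above, is not guaranteed.
         */
}

In mode 1 the proposed InflatedTotal lets you reconstruct the pre-inflation MemTotal; in mode 2 it tells you how much of MemTotal is not really usable right now.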

--
Regards,
Alexander Atanasov