
Re: [PATCH 3/3] memory: introduce an option to force onlining of hotplug memory

On 23.07.20 15:59, Roger Pau Monné wrote:
On Thu, Jul 23, 2020 at 03:22:49PM +0200, David Hildenbrand wrote:
On 23.07.20 14:23, Roger Pau Monné wrote:
On Thu, Jul 23, 2020 at 01:37:03PM +0200, David Hildenbrand wrote:
On 23.07.20 10:45, Roger Pau Monne wrote:
Add an extra option to add_memory_resource that overrides the memory
hotplug online behavior in order to force onlining of memory from
add_memory_resource unconditionally.
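
In code, the shape of such a change would be roughly the following
(sketch only; the force_online parameter name is just for illustration,
not necessarily what the patch uses):

    int __ref add_memory_resource(int nid, struct resource *res,
                                  bool force_online)
    {
            ...
            /* online pages when requested by config or by the caller */
            if (force_online || memhp_default_online_type != MMOP_OFFLINE)
                    walk_memory_blocks(start, size, NULL,
                                       online_memory_block);
            ...
    }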

This is required for the Xen balloon driver, which must run the
online page callback in order to correctly process the newly added
memory region. Note that this is an unpopulated region that Linux uses
either to hotplug RAM or to map foreign pages from other domains;
hence, when running on Xen, memory hotplug can be triggered without the
user explicitly requesting it, as part of the normal operation of the
OS when it attempts to map memory from a different domain.
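
The callback in question is the one the balloon driver registers via
set_online_page_callback(); condensed from drivers/xen/balloon.c, it
hands onlined pages back to the balloon rather than freeing them to the
buddy:

    static void xen_online_page(struct page *page, unsigned int order)
    {
            unsigned long pfn, start_pfn = page_to_pfn(page);

            mutex_lock(&balloon_mutex);
            for (pfn = start_pfn; pfn < start_pfn + (1UL << order); pfn++)
                    balloon_append(pfn_to_page(pfn));
            mutex_unlock(&balloon_mutex);
    }

    /* and, at driver init time: */
    set_online_page_callback(&xen_online_page);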

Setting a different default value of memhp_default_online_type when
attaching the balloon driver is not a robust solution, as the user (or
distro init scripts) could still change it and thus break the Xen
balloon driver.

I think we discussed this a couple of times before (even triggered by my
request), and this is the responsibility of user space to configure.
Distros usually ship udev rules that online memory automatically. In
particular, user space should be able to configure *how* to online memory.
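
The typical distro rule is a one-liner that onlines every hotplugged
block as it appears:

    SUBSYSTEM=="memory", ACTION=="add", ATTR{state}=="offline", \
      ATTR{state}="online"

and the kernel-side default can be changed at runtime through
/sys/devices/system/memory/auto_online_blocks (e.g. "offline", "online",
"online_movable"), which is exactly the knob a user could flip under the
balloon driver's feet.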

Note (as per the commit message) that in the specific case I'm
referring to, the memory hotplugged by the Xen balloon driver will be
an unpopulated range to be used internally by certain Xen subsystems,
like the xen-blkback or the privcmd drivers. The addition of such
blocks of (unpopulated) memory can happen without the user explicitly
requesting it, and hence without the user even being aware that such a
hotplug process is taking place. To be clear: no actual RAM will be
added to the system.

Okay, but there is also the case where Xen will actually hotplug memory
using this same handler IIRC (at least I've read papers about it). Both
uses go through the same handler, correct?

Yes, it's used for this dual purpose, which I have to admit I don't
like that much either.

One set of pages should clearly be used for RAM memory hotplug, and
the other to map foreign pages that are not related to memory hotplug;
it's just that we happen to need a physical region with backing struct
pages for the latter.

It's the admin/distro responsibility to configure this properly. In case
this doesn't happen (or as you say, users change it), bad luck.

E.g., virtio-mem takes care not to add more memory in case it is not
getting onlined. I remember Hyper-V has similar code that at least waits
a bit for memory to get onlined.
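
Roughly the following pattern (paraphrased with simplified names, not
the actual driver code): complete a completion from a memory-online
notifier and back off if it doesn't fire in time:

    /* sketch: stop hot-adding when added memory doesn't get onlined */
    reinit_completion(&online_done);        /* init_completion() at setup */
    rc = add_memory(nid, start, size);      /* signature varies by version */
    if (!rc && !wait_for_completion_timeout(&online_done, 5 * HZ))
            pr_warn("added memory was not onlined, pausing hot-add\n");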

I don't think virtio-mem or Hyper-V use the hotplug system in the same
way as Xen; as said, this is done to add unpopulated memory regions that
will be used to map foreign memory (from other domains) by Xen drivers
on the system.

Indeed, if the memory is never exposed to the buddy (and all you need is
struct pages + a kernel virtual mapping), I wonder if
memremap/ZONE_DEVICE is what you want?
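
Untested sketch of what that could look like (the dev_pagemap layout
and the MEMORY_DEVICE_* types differ between kernel versions, and the
function name is made up, so take this as the general shape only):

    #include <linux/memremap.h>

    static struct dev_pagemap pgmap;

    static void *xen_memremap_unpopulated(struct resource *res)
    {
            pgmap.type = MEMORY_DEVICE_GENERIC;
            pgmap.range.start = res->start;
            pgmap.range.end = res->end;
            pgmap.nr_range = 1;

            /* creates struct pages and a kernel mapping for the range,
             * without ever exposing the pages to the buddy allocator;
             * returns the mapped address or an ERR_PTR() */
            return memremap_pages(&pgmap, NUMA_NO_NODE);
    }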

I'm certainly not familiar with the Linux memory subsystem, but if
that gets us a backing struct page and a kernel mapping then I would
say yes.

Then you won't have user-visible
memory blocks created with unclear online semantics, partially involving
the buddy.

Seems like a fine solution.

Juergen: would you be OK with using a separate page-list for
alloc_xenballooned_pages on HVM/PVH, using the logic described by
David?

I guess I would leave PV as-is, since it already has this reserved
region to map foreign pages.
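
Hypothetically (all names here made up for illustration), that separate
list could just chain the memremap'ed pages through zone_device_data:

    static struct page *free_list;          /* ZONE_DEVICE-backed pages */
    static DEFINE_SPINLOCK(free_lock);

    int xen_alloc_unpopulated_pages(unsigned int nr, struct page **pages)
    {
            unsigned int i;
            int ret = 0;

            spin_lock(&free_lock);
            for (i = 0; i < nr; i++) {
                    if (!free_list) {
                            /* a real version would memremap more of the
                             * reserved region and undo partial allocs */
                            ret = -ENOMEM;
                            break;
                    }
                    pages[i] = free_list;
                    free_list = free_list->zone_device_data;
            }
            spin_unlock(&free_lock);
            return ret;
    }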

I would really like a common solution, especially as it would enable
pv driver domains to use that feature, too.

And finding a region for this memory zone in PVH dom0 should be common
with PV dom0 after all. We don't want to collide with either PCI space
or hotplug memory.



