
Re: [PATCH 3/3] memory: introduce an option to force onlining of hotplug memory



On 23.07.20 15:47, David Hildenbrand wrote:
> On 23.07.20 15:22, David Hildenbrand wrote:
>> On 23.07.20 14:23, Roger Pau Monné wrote:
>>> On Thu, Jul 23, 2020 at 01:37:03PM +0200, David Hildenbrand wrote:
>>>> On 23.07.20 10:45, Roger Pau Monne wrote:
>>>>> Add an extra option to add_memory_resource that overrides the memory
>>>>> hotplug online behavior in order to force onlining of memory from
>>>>> add_memory_resource unconditionally.
>>>>>
>>>>> This is required for the Xen balloon driver, which must run the
>>>>> online page callback in order to correctly process the newly added
>>>>> memory region. Note this is an unpopulated region that is used by Linux
>>>>> to either hotplug RAM or to map foreign pages from other domains, and
>>>>> hence memory hotplug when running on Xen can happen even without the
>>>>> user explicitly requesting it, as part of the normal operation of the
>>>>> OS when attempting to map memory from a different domain.
>>>>>
>>>>> Setting a different default value of memhp_default_online_type when
>>>>> attaching the balloon driver is not a robust solution, as the user (or
>>>>> distro init scripts) could still change it and thus break the Xen
>>>>> balloon driver.
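
For context on the interface under discussion: at this point
add_memory_resource() took just a node id and a resource, and onlining of
the resulting memory blocks was governed by memhp_default_online_type (the
policy behind /sys/devices/system/memory/auto_online_blocks). A minimal
sketch of a caller-side override of that policy; the wrapper below is
illustrative only, not the actual patch:

#include <linux/memory.h>
#include <linux/memory_hotplug.h>

/* Online one memory block, as the kernel's auto-online path does. */
static int online_one_block(struct memory_block *mem, void *arg)
{
	return device_online(&mem->dev);
}

/*
 * Sketch only: add a range and force-online its memory blocks,
 * so the result no longer depends on memhp_default_online_type.
 */
static int add_memory_resource_forced(int nid, struct resource *res)
{
	int ret = add_memory_resource(nid, res);	/* v5.8-era signature */

	if (ret)
		return ret;
	return walk_memory_blocks(res->start, resource_size(res), NULL,
				  online_one_block);
}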

>>>> I think we discussed this a couple of times before (even triggered by my
>>>> request), and this is the responsibility of user space to configure. Usually
>>>> distros have udev rules to online memory automatically. Especially, user
>>>> space should be able to configure *how* to online memory.
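
For reference, the udev rule distros typically ship for this looks roughly
like the following (the exact file and match keys vary per distro):

# Online hotplugged memory blocks as soon as they appear. The assigned
# value controls *how*: "online", "online_movable" or "online_kernel".
SUBSYSTEM=="memory", ACTION=="add", ATTR{state}=="offline", \
	ATTR{state}="online"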

>>> Note (as per the commit message) that in the specific case I'm
>>> referring to, the memory hotplugged by the Xen balloon driver will be
>>> an unpopulated range to be used internally by certain Xen subsystems,
>>> like the xen-blkback or the privcmd drivers. The addition of such
>>> blocks of (unpopulated) memory can happen without the user explicitly
>>> requesting it, and hence without the user even being aware such a
>>> hotplug process is taking place. To be clear: no actual RAM will be
>>> added to the system.

>> Okay, but there is also the case where Xen will actually hotplug memory
>> using this same handler IIRC (at least I've read papers about it). Both
>> are using the same handler, correct?


>>>> It's the admin/distro responsibility to configure this properly. In case
>>>> this doesn't happen (or as you say, users change it), bad luck.
>>>>
>>>> E.g., virtio-mem takes care to not add more memory in case it is not
>>>> getting onlined. I remember hyper-v has similar code to at least wait a
>>>> bit for memory to get onlined.
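
For illustration, the hv_balloon pattern alluded to is roughly the
following (heavily simplified; the real driver completes the event from a
memory notifier once the new blocks go online):

#include <linux/completion.h>
#include <linux/memory_hotplug.h>

static DECLARE_COMPLETION(ol_waitevent);

/* Sketch of "add memory, then wait a bit for onlining". */
static int hot_add_and_wait(int nid, struct resource *res)
{
	int ret;

	reinit_completion(&ol_waitevent);
	ret = add_memory_resource(nid, res);
	if (ret)
		return ret;

	/*
	 * Give user space (e.g. a udev rule) a few seconds to online
	 * the new blocks before attempting to add more memory.
	 */
	wait_for_completion_timeout(&ol_waitevent, 5 * HZ);
	return 0;
}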

>>> I don't think VirtIO or Hyper-V use the hotplug system in the same way
>>> as Xen; as I said, this is done to add unpopulated memory regions that
>>> will be used to map foreign memory (from other domains) by Xen drivers
>>> on the system.

>> Indeed, if the memory is never exposed to the buddy (and all you need is
>> struct pages + a kernel virtual mapping), I wonder if
>> memremap/ZONE_DEVICE is what you want? Then you won't have user-visible
>> memory blocks created with unclear online semantics, partially involving
>> the buddy.
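
To make the suggestion concrete: a minimal sketch, assuming all the caller
needs is struct pages plus a kernel mapping for a range that stays out of
the buddy allocator (close in spirit to what Xen later implemented in
drivers/xen/unpopulated-alloc.c); names illustrative, error handling
trimmed:

#include <linux/err.h>
#include <linux/ioport.h>
#include <linux/memremap.h>
#include <linux/slab.h>

static int create_unpopulated_range(resource_size_t size)
{
	struct dev_pagemap *pgmap;
	struct resource *res;
	void *vaddr;

	res = kzalloc(sizeof(*res), GFP_KERNEL);
	if (!res)
		return -ENOMEM;
	res->name = "Scratch space";
	res->flags = IORESOURCE_MEM | IORESOURCE_BUSY;

	/* Carve a free hole out of the physical address space. */
	if (allocate_resource(&iomem_resource, res, size, 0, -1,
			      PAGES_PER_SECTION * PAGE_SIZE, NULL, NULL)) {
		kfree(res);
		return -ENOMEM;
	}

	pgmap = kzalloc(sizeof(*pgmap), GFP_KERNEL);
	if (!pgmap)
		return -ENOMEM;
	pgmap->type = MEMORY_DEVICE_DEVDAX;	/* v5.8 naming; "generic" later */
	pgmap->res = *res;			/* dev_pagemap still used a resource here */

	/*
	 * Creates the memmap (struct pages) and a kernel mapping for the
	 * range, but the pages never reach the buddy, so no user-visible
	 * memory blocks or onlining are involved.
	 */
	vaddr = memremap_pages(pgmap, NUMA_NO_NODE);
	return IS_ERR(vaddr) ? PTR_ERR(vaddr) : 0;
}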

> And just a note that there is also DCSS on s390x / z/VM, which allows
> mapping segments into the VM physical address space (e.g., you can share
> segments between VMs). They don't need any memmap (struct page) for that
> memory, though. All they do is create the identity mapping in the kernel
> virtual address space manually. Not sure what the exact requirements on
> the Xen side are. I assume you need a memmap for this memory.

We need to be able to do I/O with that memory via normal drivers and we
need to be able to map it, both from user land and from the kernel.


Juergen



 

