
Re: [PATCH] x86/pod: Do not fragment PoD memory allocations

On 24.01.2021 05:47, Elliott Mitchell wrote:
> Previously p2m_pod_set_cache_target() would fall back to allocating 4KB
> pages if 2MB pages ran out.  This is counterproductive since it suggests
> severe memory pressure and is likely a precursor to a memory exhaustion
> panic.  As such don't try to fill requests for 2MB pages from 4KB pages
> if 2MB pages run out.

I disagree - there may be ample 4k pages available, yet no 2MB
ones at all. I only agree that this _may_ be counterproductive
_if indeed_ the system is short on memory.

> Signed-off-by: Elliott Mitchell <ehem+xen@xxxxxxx>
> ---
> Changes in v2:
> - Include the obvious removal of the goto target.  You always realize
>   you're in the wrong place right after you press "send".

Please could you also label the submission then accordingly? I
got puzzled by two identically titled messages side by side,
until I noticed the difference.

> I'm not including a separate cover message since this is a single hunk.
> This really needs some checking in `xl`.  If one has a domain which
> sometimes gets started on different hosts and is sometimes modified with
> slightly differing settings, one can run into trouble.
> In this case the particular domain is most often used PV/PVH, but
> every so often it is used as a template for HVM.  Starting it
> HVM will trigger PoD mode.  If it is started on a machine with less
> memory than others, PoD may well exhaust all memory and then trigger a
> panic.
> `xl` should likely fail HVM domain creation when the maximum memory
> exceeds available memory (never mind total memory).

I don't think so, no - it's the purpose of PoD to allow starting
a guest despite there not being enough memory available to
satisfy its "max", as such guests are expected to balloon down
immediately, rather than triggering an oom condition.

> For example try a domain with the following settings:
> memory = 8192
> maxmem = 2147483648
> If type is PV or PVH, it will likely boot successfully.  Change type to
> HVM and unless your hardware budget is impressive, Xen will soon panic.

Xen will panic? That would need fixing if so. Also I'd consider
an excessively high maxmem (compared to memory) a configuration
error. According to my experiments long, long ago I seem to
recall that a factor beyond 32 is almost never going to lead to
anything good, irrespective of guest type. (But as said, badness
here should be restricted to the guest; Xen itself should limp
on fine.)
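
To put a number on that (taking the quoted values at face value in
xl's default MiB units for both fields - `pod_ratio` is just a
hypothetical helper for illustration):

```c
/*
 * Ratio of maxmem to initial memory, both in MiB (xl's default
 * units).  With the quoted configuration this comes out to
 * 2147483648 / 8192 = 262144 - vastly beyond the ~32x factor
 * mentioned above.
 */
static unsigned long pod_ratio(unsigned long maxmem_mib,
                               unsigned long memory_mib)
{
    return maxmem_mib / memory_mib;
}
```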

> --- a/xen/arch/x86/mm/p2m-pod.c
> +++ b/xen/arch/x86/mm/p2m-pod.c
> @@ -212,16 +212,13 @@ p2m_pod_set_cache_target(struct p2m_domain *p2m, unsigned long pod_target, int p
>              order = PAGE_ORDER_2M;
>          else
>              order = PAGE_ORDER_4K;
> -    retry:
>          page = alloc_domheap_pages(d, order, 0);
>          if ( unlikely(page == NULL) )
>          {
> -            if ( order == PAGE_ORDER_2M )
> -            {
> -                /* If we can't allocate a superpage, try singleton pages */
> -                order = PAGE_ORDER_4K;
> -                goto retry;
> -            }
> +            /* Superpages allocation failures likely indicate severe memory
> +            ** pressure.  Continuing to try to fulfill attempts using 4KB pages
> +            ** is likely to exhaust memory and trigger a panic.  As such it is
> +            ** NOT worth trying to use 4KB pages to fulfill 2MB page requests.*/

Just in case my arguments against this change get overridden:
This comment is malformed - please see ./CODING_STYLE.
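For reference, the expected shape per ./CODING_STYLE is a bare `/*`
opening line with ` * ` continuations (wording below is only a sketch
of how the comment might read):

```c
/*
 * Superpage allocation failures likely indicate severe memory
 * pressure.  Continuing to fill the request with 4k pages is
 * likely to exhaust memory and trigger a panic.  Hence it is
 * not worth falling back to 4k pages here.
 */
```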



