Re: [Xen-devel] [RFC PATCH v2 00/12] get rid of GFP_ZONE_TABLE/BAD
On Tue, May 22, 2018 at 08:37:28PM +0200, Michal Hocko wrote:
> So why is this any better than the current code. Sure I am not a great
> fan of GFP_ZONE_TABLE because of how it is incomprehensible but this
> doesn't look too much better, yet we are losing a check for incompatible
> gfp flags. The diffstat looks really sound but then you just look and
> see that the large part is the comment that at least explained the gfp
> zone modifiers somehow and the debugging code. So what is the selling
> point?

I have a plan, but it's not exactly fully-formed yet.

One of the big problems we have today is that we have a lot of users
who have constraints on the physical memory they want to allocate, but
we have very limited abilities to provide them with what they're asking
for. The various different ZONEs have different meanings on different
architectures and are generally a mess. If we had eight ZONEs, we could
offer:

ZONE_16M	// 24 bit
ZONE_256M	// 28 bit
ZONE_LOWMEM	// CONFIG_32BIT only
ZONE_4G		// 32 bit
ZONE_64G	// 36 bit
ZONE_1T		// 40 bit
ZONE_ALL	// everything larger
ZONE_MOVABLE	// movable allocations; no physical address guarantees

#ifdef CONFIG_64BIT
#define ZONE_NORMAL	ZONE_ALL
#else
#define ZONE_NORMAL	ZONE_LOWMEM
#endif

This would cover most driver DMA mask allocations; we could tweak the
offered zones based on analysis of what people need.

#define GFP_HIGHUSER		(GFP_USER | ZONE_ALL)
#define GFP_HIGHUSER_MOVABLE	(GFP_USER | ZONE_MOVABLE)

One other thing I want to see is that fallback between zones happens
from highest to lowest normally (ie if you fail to allocate in 1T,
then you try to allocate from 64G), but movable allocations happen
from lowest to highest. So ZONE_16M ends up full of page cache pages
which are readily evictable for the rare occasions when we need to
allocate memory below 16MB.

I'm sure there are lots of good reasons why this won't work, which is
why I've been hesitant to propose it before now.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel