RE: [Xen-devel] A tale of three memory allocators
Hi Dan,

This is a really good write-up. :) My feeling is that we should first merge with the Xen common memory system, and then, on top of that simplified model, carefully investigate the best and cleanest approach to supporting add-on features such as NUMA.

Yes, it would be great if Xen could run on so many different machine models soon, and reusing existing Linux code is certainly the quickest way to get NUMA support. But sometimes a redesign around the new usage model is more valuable than copying code written for a largely different one. :) A VM hypervisor differs from a normal OS to a large extent. Linux memory management is efficient, but it contains many things that are redundant for Xen. For example, Linux has to distinguish allocation requests for normal memory from DMA-capable memory; Xen instead delegates physical devices to Dom0 (the service OS), so it never needs to know about DMA constraints. Another example: a large part of the Linux memory-management code handles user processes, which look quite different from domains. That brings many uncertainties for later development. So adopting the Xen common code lets us quickly pick up new fixes, updates, and design improvements that benefit the virtual-machine model; borrowing Linux code instead means the complexity of maintaining it and tracking Linux changes, and much of that effort may well turn out to be irrelevant to Xen's usage model.

Our patch actually replaces the Linux code with the Xen common memory subsystem, including the boot-time allocator, the buddy system, and Rusty's simple slab allocator. If this can be merged first, we then have a better base from which to consider how to support a NUMA model in Xen. IMO, Xen already leaves room for such an enhancement: the buddy system is built on the concept of zones, and a zone can equally well represent a node. A quick way might be (as Ian points out):
1. Define more zone IDs, e.g.:

    #define MEMZONE_XEN  0
    #define MEMZONE_DOM  1
    #define MEMZONE_NODE 4              /* say, 4 nodes */
    #define NR_ZONES     (MEMZONE_NODE + 2)

2. Define new wrapper interfaces:

    struct pfn_info *alloc_node_pages(struct domain *d, unsigned int order)
    {
        /* scan the node list ... */
        /* alloc_heap_pages(node_id, order); */
        ...
    }

Maybe later this could also be enhanced into a hierarchical node structure, if really required. Who knows? Anyway, I just throw the above out as an example that it is not so difficult, once we merge with the Xen common code first. :)

Thanks,
Kevin

>-----Original Message-----
>From: xen-devel-admin@xxxxxxxxxxxxxxxxxxxxx
>[mailto:xen-devel-admin@xxxxxxxxxxxxxxxxxxxxx] On Behalf Of Magenheimer, Dan (HP Labs Fort Collins)
>Sent: Thursday, March 17, 2005 3:09 PM
>To: xen-devel@xxxxxxxxxxxxxxxxxxxxx
>Subject: [Xen-devel] A tale of three memory allocators