[Xen-devel] Xen/ia64 to-do list
I've had a few requests of the sort "If I had some time to help out with
Xen/ia64, what things need to be done?" so I put together the following
preliminary to-do list over the weekend. I'm undoubtedly missing some
things... this is just a first draft.

Happy holidays to all!
Dan

================
Xen/ia64 to-do list
Rev 041220

- Write a user-mode privop substitution program for domain0. (Top of my
  list.)
- Deliver a complete set of bits so others can look at the code and try it
  out on other boxes. (Also at the top of my list.)
- Architect/implement the hypercalls necessary to support multiple domains.
  (This is next on my list, but I could probably use some help.) I expect
  this will require some cooperation with the core Xen team, as the current
  interface is fairly x86-centric. Some interesting challenges off the top
  of my head:
  - how to do page-flipping on a variable page-size architecture where
    different guests may have different page sizes (and all larger than 4k)
  - is there an opportunity for using "fast hypercalls" modeled after
    Linux/ia64 fast syscalls; if so, are multicalls still necessary?
  - what changes are *required* in the interface to higher levels (e.g.
    xend)? Example: multiple page sizes
- Add proper generation of an offsets.h to the build process. Right now I am
  just using a few hardcoded constants. Note that offsets.h needs to be
  "public", as it will be used with a paravirtualized Linux/ia64. (A sketch
  of the usual asm-offsets approach appears after this list.)
- Implement "poor man's exception handling" to enable a broader range of
  "user" (xen->domain) access, leveraging Linux and Linux/ia64 code if at
  all possible. Note that some special linker magic is required, so lds.S
  and build-script changes may be needed too. Better user access will be
  necessary to support multicall hypercalls. (See the exception-table sketch
  after this list.)
- Finish implementing (long-format) VHPT support in Xen itself. Probably the
  biggest reason guest performance is currently so poor is the higher TLB
  miss ratio, since the hardware walker always misses. I have some code for
  this somewhere... :-} (See the entry-layout sketch after this list.)
- Related to the VHPT, I've got some ideas about improving support for
  non-idempotent metaphysical->physical TLB mappings that should be
  implemented to reduce TLB misses on (future) guests.
- Implement SMP support for Xen itself. Since so much code is leveraged from
  Linux and Xen/x86, which both support SMP, this might be easy to turn on,
  but there are surely lurking races and global variables which will be fun
  to debug. Later... catch up with Xen/x86 by implementing SMP guests.
- Integrate perfmon support into Xen so tuning can be measured. May require
  adding hypercalls to fetch performance data, so this is dependent on the
  user access work (above). [CERN has first dibs on this if they want it.]
- Work on a transparent/optimized paravirtualization of Linux/ia64. I did
  this before on vBlades (albeit with 2.4.x... 2.6 may be a bit harder) and
  have a lot of ideas/input. The hardest part is ivt.S, so detailed
  knowledge of the bowels of ia64 is a must. Note that the offsets.h work
  should be done first.
- Move all the Linux dependencies/patches forward to 2.6.10 when it comes
  out.
- Fix/implement discontiguous memory support. I ran into a problem with
  discontiguous memory so I turned it off (and added a hack to disallow
  physical memory above 4GB). There's been much work in this area since
  2.6.7, so this probably should wait until after the move forward to
  2.6.10.
- Improve "early printk" support. Leverage the new work in 2.6.10 to utilize
  EFI conout.
  Currently there is a hardcoded assembly routine for console output; this
  may not work on boxes other than the hp rx2600 for which it was written.
  For extra credit, connect the new EFI conout support in Linux (with a
  minimal patch) so that Linux early printk causes hypercalls to Xen iff it
  is running on Xen. (A hedged routing sketch appears after this list.)
- Look into building versions of Linux with front-end (and perhaps back-end)
  virtual device drivers using transparent paravirtualization, and provide a
  patch to Linux/ia64 that does this.
- Understand and fix the "timer tick delivered before it's due" problem.
  Also, is there a problem with idle that causes unexpected delay of timer
  ticks? Extra credit: can the "idle domain" be eliminated entirely?
- Implement and test CONFIG_IA32_SUPPORT. There is one known privop issue;
  the rest may turn on easily or may turn out to be hard... don't know.
- See if kernel modules can be made to work with domain0 (and later with
  guests). Currently domain0 must be fully linked (no module support), which
  means, for example, a stock Red Hat kernel doesn't work.
- Implement an integration of fpswa.efi (floating-point software assist)
  into Xen/ia64. (Any security/performance isolation issues here?)
- Test NaT support and fix the (probably many) bugs that arise.
- Develop a patch for Linux/ia64 so it will run unchanged both on ski and on
  real hardware. (Xen does, but it would be nice if Xenlinux did too.)
- Look into whether Xen can run directly as an EFI application to eliminate
  the need to use elilo as a bootloader.
- Test Xen/ia64 on other distributions and generate rpm's/deb's etc.,
  preferably with some automation.
- Test Xen/ia64 with other open source OS's (e.g. NetBSD?) to find/fix bugs
  before they show up in some future version of Linux.
- Develop a good automated Xen/ia64 regression test suite/package that is
  distribution-independent.
- Develop a good automated Xen/ia64 performance test suite that is
  distribution-independent.
- Help with documentation.
- Help/ideas for keeping Xen/ia64 up with core Xen (a moving target).
- Get a Xen/ia64 web page up (at Cambridge or HPL or ???)
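
For the offsets.h item above, the usual approach (borrowed from Linux's
asm-offsets machinery) looks roughly like the sketch below: a throwaway C
file is compiled with -S, and the marker lines it leaves in the assembler
output are turned into #defines by a sed pass in the build. The structure
and symbol names here are placeholders, not the real Xen/ia64 layout.

    /* asm-offsets.c sketch: never linked, only compiled with -S.
     * The "->" markers survive into the .s file, where a sed rule
     * rewrites each "->NAME value comment" line into
     * "#define NAME value" in the generated offsets.h. */
    #include <stddef.h>

    #define DEFINE(sym, val) \
            asm volatile("\n->" #sym " %0 " #val : : "i" (val))
    #define BLANK() asm volatile("\n->" : : )

    /* Placeholder structures standing in for the real Xen/ia64 ones. */
    struct vcpu_arch { unsigned long privregs; };
    struct vcpu { long id; struct vcpu_arch arch; };

    void foo(void)
    {
            DEFINE(IA64_VCPU_ID_OFFSET, offsetof(struct vcpu, id));
            BLANK();
            DEFINE(IA64_VCPU_PRIVREGS_OFFSET,
                   offsetof(struct vcpu, arch.privregs));
    }

Because the generated header is just #defines of numeric constants, it can
be exported ("public") and included from paravirtualized Linux/ia64 assembly
as well as from Xen itself.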
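
For the "poor man's exception handling" item, the Linux-style mechanism it
refers to works roughly as follows: every instruction that may fault while
touching guest memory gets a companion entry in a dedicated table, and the
fault handler consults that table before treating the fault as fatal. The
sketch below is deliberately simplified; entry layout, symbol names, and the
linear search are illustrative only (the real Linux/ia64 code stores
section-relative offsets and binary-searches a sorted table), and the
section start/stop symbols are the "linker magic" the lds.S changes would
have to provide.

    /* Simplified exception-table sketch, not the actual implementation. */
    struct xen_ex_table_entry {
            unsigned long addr;   /* address of the instruction that may fault */
            unsigned long fixup;  /* address to continue at if it does fault   */
    };

    /* Provided by the linker script around the __ex_table section. */
    extern struct xen_ex_table_entry __start___ex_table[], __stop___ex_table[];

    static unsigned long search_ex_table(unsigned long fault_ip)
    {
            struct xen_ex_table_entry *e;

            for (e = __start___ex_table; e < __stop___ex_table; e++)
                    if (e->addr == fault_ip)
                            return e->fixup;   /* resume at the fixup code  */
            return 0;                          /* no entry: a genuine fault */
    }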
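
For the long-format VHPT item, the hardware-defined entry is 32 bytes; a
rough C view of it is below. The field names are illustrative and the bit
layouts within each word are omitted, so treat the architecture manual as
the authority. The fourth word is ignored by the hardware walker, which
leaves it free for software use, e.g. as a collision-chain link.

    /* Rough sketch of an ia64 long-format VHPT entry (32 bytes),
     * as seen by the hardware walker. */
    struct vhpt_lf_entry {
            unsigned long page_flags;   /* p, ma, a, d, pl, ar, ppn, ed      */
            unsigned long itir;         /* page size (ps) and protection key */
            unsigned long ti_tag;       /* tag; ti bit set => entry invalid  */
            unsigned long cchain;       /* ignored by hardware; available to
                                           software, e.g. collision chain    */
    };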
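
For the early-printk item, the "hypercalls to Xen iff running on Xen" part
could be wired up roughly as below. Both running_on_xen and
xen_console_write() are hypothetical placeholders for whatever detection
flag and hypercall wrapper the real interface ends up providing; the
fallback path would be the existing EFI conout code, untouched on native
hardware.

    /* Hedged sketch only: route early console output to Xen when (and
     * only when) the kernel is running on Xen.  All three externs are
     * placeholders, not existing interfaces. */
    extern int running_on_xen;                        /* set during early boot  */
    extern void xen_console_write(const char *, int); /* hypothetical hypercall */
    extern void efi_conout_write(const char *, int);  /* existing EFI conout    */

    void early_printk_write(const char *s, unsigned int n)
    {
            if (running_on_xen)
                    xen_console_write(s, n);  /* one hypercall per chunk */
            else
                    efi_conout_write(s, n);   /* native path, unchanged  */
    }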