Re: [MirageOS-devel] Cleaning up the Mini-OS namespace; coordinating development
On 10 Nov 2014, at 11:37, Anil Madhavapeddy <anil@xxxxxxxxxx> wrote:
>
> On 10 Nov 2014, at 10:29, Martin Lucina <martin@xxxxxxxxxx> wrote:
>>
>> (Cross-posting to rumpkernel-users at Antti's request)
>>
>> Hi,
>>
>> I've been working on cleaning up the Mini-OS namespace so that we can build
>> arbitrary unmodified software with rumprun-xen without running into
>> namespace conflicts.
>>
>> Given that MirageOS also uses Mini-OS for a similar purpose ("firmware
>> layer" of a standalone Xen stack for building applications), you may be
>> interested in picking up these changes, and we could coordinate further
>> development of Mini-OS as far as is practical.
>>
>> Currently this work is more or less complete and available for review in
>> the "wip-xenos" branch of my GitHub repository:
>>
>> https://github.com/mato/rumprun-xen/tree/wip-xenos
>>
>> The original discussion on the rumpkernel-users list can be found here:
>>
>> http://thread.gmane.org/gmane.comp.rumpkernel.user/514
>
> This makes sense to me. We use upstream Mini-OS now (with some patches to
> support installation as a library), so we don't need to do anything
> special to benefit from your patches if you submit them upstream. Have
> you sent an RFC to xen-devel to get other people's reactions to this? I
> can't imagine it'll be too contentious if other users (such as the qemu
> stub domain) are also fixed up to support this.

Incidentally, it might help to regenerate your patch stream against the
https://github.com/mirage/xen repository. This is a direct mirror of the
upstream Xen trees, and we base our Mirage-specific Mini-OS off this tree.

Also CCing Adam Wick from HaLVM, as this might help him.

-anil

_______________________________________________
MirageOS-devel mailing list
MirageOS-devel@xxxxxxxxxxxxxxxxxxxx
http://lists.xenproject.org/cgi-bin/mailman/listinfo/mirageos-devel