
Re: [Xen-devel] mini-guest io emulation

Nakajima, Jun wrote:
For the "mini guest", I think it could be much easier if we
substantially strip down xenlinux rather than adding (eventually) a lot
of stuff to the current mini-os, mainly because we probably need a
multi-threaded run-time environment, scheduler, memory allocator, event
handling, drivers such as xenbus/netfront/blkfront, etc. At least, I
think we can use xenlinux as the development platform. For example,
implement qemu-dm as a driver, adding the required infrastructure
(e.g. a small in-kernel glibc).
Once you get past vl.c, qemu-dm has very little reliance on glibc functions. Since we're only trying to do hardware emulation here, I'd expect that vl.c would not be included at all.

I suspect stripping down Linux is going to prove harder in the long run. As Jacob mentioned, you only really need a simple page allocator. The only reasons I can think of to use threads are XenBus (threads shouldn't be required to implement it) and asynchronous IO.
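To give a sense of scale, "a simple page allocator" can be as small as a bitmap scan over a fixed pool. The sketch below is purely illustrative: the names (`alloc_page`, `free_page`, `NR_PAGES`) and the bitmap scheme are hypothetical, not mini-os's actual allocator.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical minimal page allocator: one bit per page over a
 * statically sized pool. Not mini-os code -- just an illustration
 * of how little a stripped-down guest actually needs. */

#define PAGE_SIZE 4096
#define NR_PAGES  1024                   /* 4 MiB pool */

static uint8_t pool[NR_PAGES][PAGE_SIZE];
static uint8_t bitmap[NR_PAGES / 8];     /* bit set = page in use */

static int page_in_use(unsigned i)
{
    return bitmap[i / 8] & (1u << (i % 8));
}

/* Allocate one page by linear scan; returns NULL when the pool is full. */
void *alloc_page(void)
{
    for (unsigned i = 0; i < NR_PAGES; i++) {
        if (!page_in_use(i)) {
            bitmap[i / 8] |= (uint8_t)(1u << (i % 8));
            return pool[i];
        }
    }
    return NULL;
}

/* Return a page to the pool by clearing its bit. */
void free_page(void *p)
{
    unsigned i = (unsigned)((uint8_t (*)[PAGE_SIZE])p - pool);
    bitmap[i / 8] &= (uint8_t)~(1u << (i % 8));
}
```

A real allocator would want alignment guarantees for DMA and perhaps an order-N (buddy) interface, but nothing here requires threads or a full Linux mm.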

I think an interesting alternative for AIO would actually be to create another VCPU specifically for the mini-os code to run in. The physical analogy is sane, and if you truly do need more parallelism you can always just use two VCPUs.


Anthony Liguori
Once the above is working we'll be in good shape. We can remove all
the skanky qemu-dm support from the tools, as from their POV paravirt
and hvm guests will look identical. It should also be easy to
implement save/restore of hvm guests -- just save the miniguest as
part of the hvm guest's memory image. The next stage would then be
to improve performance by enhancing the device models, e.g. adding a
network card that supports jumbo frames and csum offload, and requires
fewer vmexits in operation.

How best to move forward on this? Any volunteers?


Intel Open Source Technology Center

Xen-devel mailing list



