
Re: [Xen-users] rump kernels running on the Xen hypervisor



On 21.8.2013 16:41, Ian Jackson wrote:
> Antti Kantee writes ("Re: [Xen-users] rump kernels running on the Xen
> hypervisor"):
>> So, to answer your question, applications do not need to be explicitly
>> written to use rump kernels, but all of the interfaces used by
>> applications need to of course be provided somehow.  [...]

> This is all very exciting and as Ian Jackson says very similar to
> something I've been working on.  I started from the other end and it
> may be that I can combine what I've done so far with what you've done.

It would be great to find immediate synergies. Can you be more specific about what you've been working on?

> I compiled up your example, against Xen 4.4-unstable, and I'm afraid
> it doesn't work for me.  Console log below.  Do you have any
> suggestions for debugging it or should I just plunge in ?
>
> Did you test this on i386 or should I rebuild as amd64 ?

I'm testing an i386 dom0+domU with Xen 4.2.2. But I think we should make all of the combinations work eventually, so we might as well start now.

I fixed/pushed one use-after-free which stood out.

If the above wasn't it ... I'm not sure I can teach much about debugging Xen guests on these lists. I've just been using gdbsx. Additionally, "l *0xEIP" in gdb has been quite effective for debugging crashes even without gdbsx -- the rump kernel bits are quite well tested and everything outside of them is so simple that it's usually easy to just guess what's going wrong. For debugging, everything is built with symbols, so you can dive right in.
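Roughly, a session looks something like this (the domain id, port and binary name below are made up, and I'm going from memory on the exact gdbsx arguments, so check its usage output):

  # dom0: attach gdbsx to the 32-bit guest with domid 12, listen on port 9999
  gdbsx -a 12 32 9999

  # then point a gdb that has the unstripped guest binary at it
  gdb guest.bin
  (gdb) target remote localhost:9999

  # and even without gdbsx, mapping a crash EIP back to source:
  (gdb) l *0xc01234ab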

> I think this approach will be an excellent one for stub domains such
> as qemu, proposed stub-pygrub, etc.
>
> But thinking about what you've done, I think we probably want to do
> something a bit different with block and networking.
>
> Does the NetBSD VFS have
>   - tmpfs on variable-sized ramdisk
>   - romfs based on a cpio archive
> or the like ?  Producing an ffs image for something like a qemu-dm is
> going to be annoying.

You can create an FFS image with the portable makefs userspace utility and even edit the contents with the equally userspace fs-utils.
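For example, something like this (file names made up; see the makefs and fs-utils manual pages for the full story):

  # build an FFS image from a directory tree, no root or kernel involved
  makefs -t ffs stubroot.img ./stubroot

  # inspect and modify the image afterwards, also purely in userspace
  fsu_ls stubroot.img /
  fsu_put stubroot.img extra.conf /etc/extra.conf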

Though, I'm not sure what qemu-dm is or why it needs an FFS image. For me, this is a bit like reading the proverbial math book where they leave 20 intermediate steps out of a proof because they're considered "obvious" ;)

> And often networking wants to be handled by something like SOCKS
> rather than by having an extra TCP stack in the stub domain.  The
> reason for this is that it avoids having to allocate MAC and IP
> addresses to a whole bunch of service domains; the administrator
> probably wants them to piggyback on dom0's networking.

Ok, sounds like we shouldn't include a full TCP/IP stack for that use case. There's something called "sockin" for rump kernels that includes only the sockets layer but assumes the actual networking stack is elsewhere. I originally wrote it so that NFS clients in rump kernels could use host networking ... because configuring IP/MAC addresses for each NFS client wasn't attractive. Maybe sockin fits your use case too? (I'm guessing here. See: math book.)
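To illustrate the difference (a rough sketch -- these are the component library names from the NetBSD tree, and the exact set and link order for the Xen platform may end up slightly different):

  # full TCP/IP stack inside the rump kernel: the guest needs its own IP/MAC
  -lrumpnet_netinet -lrumpnet_net -lrumpnet

  # sockin instead: just the sockets layer, actual packet pushing
  # happens in whatever stack the host side provides
  -lrumpnet_sockin -lrumpnet

The application side is the same in both cases: plain socket(), connect(), send() and friends, no changes needed.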

  - antti
