
Re: [Xen-devel] RFC: Configuring rumprun-xen application stacks from Xenstore



On 13/11/14 15:46, Martin Lucina wrote:
Following is the high-level description from the Git commit:

Can you also post an example of the usage of your CLI tool?  Actually,
can you post a rough description of the entire process that a user would
have to follow, i.e. compile, configure, run?

Running "xr" with no parameters gives a nice command reference :-)

Sure, but asking everyone to fetch/build/run a tool and dig into the sources to be able to partake in mailing list brainstorming is excessive.

Anyhow:

Anyhow, your list of steps was helpful.  Thank you.

Running a webserver:

Get mathopd source from http://mathopd.org/. I used mathopd as it's BSD
licensed, non-forking and small. To build and run it:

Plus I don't think anyone had tried running mathopd on top of a rump kernel before. It's always good to know that more stuff just works out of the box.

4. You need filesystem images for a stub /etc and /data. I am using cd9660
for these as you can portably generate them anywhere you have genisoimage
(formerly mkisofs). (see below)

Just to expand on the need for /etc: libc calls such as getservent() and getpwent() access files from /etc and get desperately confused if such files don't exist. The alternative is to modify the application to not use such calls, or to modify libc to short-circuit them instead of trying to access /etc. Not sure what is best long-term, but short-term an easily available or generatable /etc is probably the best way to fly.

Actually, hmm, there's already an image for /etc available. I understood from the IRC discussion that you needed slight modifications to passwd for mathopd. If your changes don't conflict with what's already up there, you could just update the existing downloadable /etc image:
https://github.com/rumpkernel/rumprun-xen/tree/master/img

Of course we can't generate the content image for others, but understanding what files need to go in there is straightforward.
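
For the archives, here's a rough sketch of how such images could be put together with genisoimage. The image names match the command in step 5, but the passwd/group/services contents below are just illustrative placeholders, not authoritative:

  # stub /etc; contents are illustrative, adjust to whatever the application pokes at
  mkdir -p etc data
  printf 'root:*:0:0:root:/:/bin/sh\n' > etc/passwd
  printf 'nobody:*:32767:39:nobody:/nonexistent:/sbin/nologin\n' >> etc/passwd
  printf 'wheel:*:0:root\n' > etc/group
  printf 'http 80/tcp www\n' > etc/services     # keeps getservent() & co. happy
  cp mathopd.conf etc/                          # assuming a config file is at hand
  printf 'hello from a rump kernel\n' > data/index.html

  # pack both directories into cd9660 images (-r for Rock Ridge names/permissions)
  genisoimage -r -o etc.iso etc/
  genisoimage -r -o data.iso data/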

5. Assuming you have those, run the following in the mathopd src directory,
as root:

  # xr run -i -n inet:dhcp -b etc.iso:/etc -b data.iso:/data mathopd -nt -f /etc/mathopd.conf

Did you try running more than one? You should just be able to run as many domUs as you want and serve different content from each by altering -b, correct?
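
Roughly what I have in mind, for concreteness (the image names are purely hypothetical, and as you note further down the Xen domain naming would need sorting out before two copies can actually coexist):

  # same stack, different content image per instance
  xr run -i -n inet:dhcp -b etc.iso:/etc -b data1.iso:/data mathopd -nt -f /etc/mathopd.conf
  xr run -i -n inet:dhcp -b etc.iso:/etc -b data2.iso:/data mathopd -nt -f /etc/mathopd.conf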

Is deconfig necessary?  The rump kernel already automatically e.g.
unmounts file systems and releases the dhcp lease when it's halted.

It does unmount filesystems (if halted correctly) but afaict it does not do
rump_pub_etfs_remove() and the dhcp stuff does not destroy the interface.
This is nitpicking, but if you don't do that then the underlying
blkfront/netfront does not get "correctly" detached either as you can see
from "port X still bound!" messages during minios_stop_kernel().

Ok, so it's a quick workaround. I should fix those in the rump kernel now that I'm aware of them.

Guess so. In my mind there is potentially more the tool can do than just
run rumprun stacks, for example:

  - manage interaction with the host networking, map host ports to
    domain:port
  - generate or otherwise manage filesystem images (eg we could have a
    custom DNS server)
  - manage stack naming on the host, this is a bit daft at the moment, eg.
    if you try to run two copies of mathopd it will fall over due to the Xen
    domain name not being unique

And so on. Maybe this can be layered into separate tools with the 'rumprun'
script dealing only with launching. Needs more thought.

Agreed.  But we should still try to get the foreseeable stuff right.

Note that in this initial version, only configuring IPv4 network
interfaces with DHCP is supported, and only using image files with ffs
or cd9660 filesystems for block devices is supported.

Would e.g. IPv6 support take longer than it took to write that paragraph ?-)

<taptaptap> ;)

My paragraph was shorter ;)

  - antti
