
[MirageOS-devel] Unikernels on clouds? Just a mirage....



Sorry about the jokey subject line.

This is both an FYI and a request - the request to MirageOS is at the bottom of 
this wall of text.

I’ve been doing a fair amount of work to try to get all sorts of things to boot 
on some of the major cloud platforms.

For better or worse I have learned hard lessons about how things work and what 
does and does not work. I thought some folks might be interested to hear what I 
learned.

One of my goals was to get unikernels booting directly on the various clouds as HVM 
guests, without containers. I only considered HVM instances, not paravirtualised 
ones, because with paravirtualisation you have no control over the kernel.

Here’s the hard-won knowledge - 60 seconds for you to absorb, much pain and 
time for me to gain.  Correct me if I’m wrong.

***** AWS/EC2 -
EC2 uses straightforward DHCP and is super easy to get going from a networking 
perspective. When the kernel boots up it just needs to request an address via 
DHCP and it should be good to go.
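
For the avoidance of doubt, on a conventional Linux guest that amounts to nothing 
more than running a stock DHCP client on the first interface at boot - a minimal 
sketch, either client works:

# busybox client: broadcast a DHCP request on eth0 and apply the offered lease
udhcpc -i eth0
# or the ISC client equivalent
dhclient eth0
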
In most cases the primary hard disk name is xvda.  Some operating systems will 
name the primary disk hda.
AWS gives you screenshots of the graphical console, which is somewhat helpful 
and better than being blind to the boot messages.
AWS also gives delayed access to the serial console output which is very 
helpful.
AWS runs Xen.

***** Google GCE -
This is a bit easy and a bit hard. GCE uses DHCP.  BUT - and it is a big but - 
the DHCP client MUST implement DHCP option 121 - classless static routes.  
Whilst it is “standard” behaviour, it is not common.  Without implementing 
option 121 your machine won’t get the static route that is needed for the GCE 
networking to work correctly. dhclient can do it with some configuration 
fiddling; I was not successful in getting other DHCP clients such as udhcpc to 
handle option 121 properly.  You could hard-code the static route, but it changes 
depending on the GCE network the machine comes up on, so that is not a general 
solution; your DHCP client really does have to implement option 121 correctly.
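
For anyone who wants to skip the same fiddling, the dhclient side usually boils 
down to something like this dhclient.conf fragment - a sketch only, and the exact 
option name and hook script vary between distributions:

# define option 121 and ask the server for it
option rfc3442-classless-static-routes code 121 = array of unsigned integer 8;
request subnet-mask, broadcast-address, routers, domain-name-servers,
        host-name, rfc3442-classless-static-routes;

The returned route list still has to be decoded and applied by a dhclient-script 
hook; several distributions ship an rfc3442 exit hook that does exactly that.
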
The hard disk name on GCE is sda.
GCE gives very good serial console output.  It does not give access to the 
graphical console.
GCE runs KVM.

***** Digital Ocean
Digital Ocean droplets have static IP addresses that are injected into the 
machine’s /etc/network/interfaces when it boots.  You can also get the IP 
address information by querying the HTTP metadata server via the link-local 
address (https://en.wikipedia.org/wiki/Link-local_address), but this requires 
parsing the returned data to pull out the bits you need to configure the 
interface.
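
For example, the relevant pieces can be fetched from the DigitalOcean metadata 
service with plain HTTP requests along these lines (a sketch using the v1 metadata 
paths as I understand them; check the DigitalOcean metadata docs for the current 
layout):

curl -s http://169.254.169.254/metadata/v1/interfaces/public/0/ipv4/address
curl -s http://169.254.169.254/metadata/v1/interfaces/public/0/ipv4/netmask
curl -s http://169.254.169.254/metadata/v1/interfaces/public/0/ipv4/gateway
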
The hard disk name on DO is vda.
The magnificent thing about Digital Ocean is that it is the only cloud provider 
I have worked with that provides a working, interactive KVM console showing all 
the boot messages, which makes diagnostics about a million times easier.
As far as I can tell it does not give serial console output.
Digital Ocean runs KVM.

***** Rackspace
Well, I had it working, but Rackspace cut off my developer account after 12 
months and I can’t get anyone at Rackspace to give me more access to their 
cloud, despite only ever using < $10 a month of their resources for testing.
Boo, Rackspace!  I can’t at this stage provide any useful information about how 
Rackspace networking actually works.
Rackspace hard disk name is xvda.
Rackspace runs Xen.

***** Softlayer
After MUCH time and pain I found that the Softlayer cloud is so locked down 
that you literally cannot control the boot process. Actually, let me qualify 
that - it MIGHT be possible to control the boot process, but since it is 
impossible to see either the serial or the graphical boot console, there’s just 
no way to work out why your kernel is not booting.
If anyone is interested, Softlayer does static IP addressing, with the static 
address configuration being injected into the OS at boot time via 
/etc/network/interfaces.
Softlayer gives a KVM console if you are willing to jump through the hoops of 
configuring a PPTP VPN to reach the KVM console address, but it is not visible 
during critical parts of the boot process, so it didn’t give me the diagnostic 
information I needed to see why things were not booting.
Softlayer hard disk name is xvda.
Softlayer runs Xen.


###### SO, what does that all mean?


Well, my goal is to be able to boot HVM unikernels directly on the clouds, 
without containers.

Despite the challenges posed by the various approaches to networking, the 
solutions are actually pretty simple.

To make a unikernel run, here is what that unikernel would need to implement:

** HVM unikernel on AWS - unikernel must implement DHCP
** HVM unikernel on GCE - unikernel must implement DHCP with option 121
** HVM unikernel on Digital Ocean - unikernel must accept a network interfaces 
file containing static IP address configuration
** HVM unikernel on Rackspace - boo Rackspace - no help for developers!
** HVM unikernel on Softlayer - I don’t think this will happen any time soon, but 
out of interest it’s static IP addressing injected at boot time

In fact it would be ideal to be able to pass an /etc/network/interfaces file into 
a unikernel specifying either DHCP or static IP address configuration. This 
could be a standard approach regardless of which cloud.
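
Concretely, the file handed to the unikernel would just be a bog-standard 
interfaces file in one of the two usual forms - a sketch, with made-up addresses 
for the static case:

# DHCP case
auto eth0
iface eth0 inet dhcp

# static case (addresses are examples only)
auto eth0
iface eth0 inet static
    address 203.0.113.10
    netmask 255.255.255.0
    gateway 203.0.113.1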

Ideally this would be passed into the unikernel at boot time in an initramfs 
file (rootfs.cpio.gz), as in the initrd line of the example grub.cfg below:

serial --speed=115200 --word=8 --parity=no --stop=1
terminal_input --append serial
terminal_output --append serial
set timeout=1
menuentry 'mirage' {
    linux  /boot/mirageunikernel root=/dev/ram0 console=ttyS0,115200
    initrd /boot/rootfs.cpio.gz
}
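
Building that rootfs.cpio.gz is a one-liner; roughly this, assuming the 
interfaces file you want to pass in is sitting in the current directory:

mkdir -p rootfs/etc/network
cp interfaces rootfs/etc/network/interfaces
# pack the tree into a newc-format cpio archive and compress it
( cd rootfs && find . | cpio -o -H newc | gzip ) > rootfs.cpio.gz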


Anyhow, that’s my request to not only the mirage project but all unikernel 
projects, including HALVM: the ability to pass /etc/network/interfaces into the 
kernel at boot time via an initramfs.

Having had a chat with some of the mirage project folks recently, I understand 
that the container approach is more likely to get resources than bare metal, and 
presumably that provides easy ways to configure networking, but I thought I’d at 
least ask.

Hopefully the knowledge here will be valuable to others.

thanks!

Andrew














_______________________________________________
MirageOS-devel mailing list
MirageOS-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/cgi-bin/mailman/listinfo/mirageos-devel

 

