
Re: MirageOS on OpenStack problem



Hi,

I have been investigating some more, and it seems to be a 'virtio-block device' problem. On the OpenStack cloud this device is reported when Solo5 boots, but not in my local installation.

I changed the drive from IDE (the default) to virtio in my local installation, and that gives the same behaviour as on the cloud. The two invocations below reproduce both behaviours.

Using an IDE drive for the image:
$ qemu-system-x86_64 -cpu Westmere -m 128 -nodefaults -no-acpi -display none -serial stdio -device virtio-net,netdev=n0 -netdev tap,id=n0,ifname=tap100,script=no,downscript=no -device isa-debug-exit -device lsi,id=scsi0,bus=pci.0,addr=0x9 \
-drive file=/home/hans/src/5g/mirage/repository.qcow2,format=qcow2,if=none,id=drive-ide0-0-0 -device ide-hd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1

SYSLINUX 6.03 20171017 Copyright (C) 1994-2014 H. Peter Anvin et al
Loading unikernel.bin... ok
            |      ___|
  __|  _ \  |  _ \ __ \
\__ \ (   | | (   |  ) |
____/\___/ _|\___/____/
Solo5: Bindings version v0.6.7
Solo5: Memory map: 128 MB addressable:
Solo5:   reserved @ (0x0 - 0xfffff)
Solo5:       text @ (0x100000 - 0x481fff)
Solo5:     rodata @ (0x482000 - 0x51dfff)
Solo5:       data @ (0x51e000 - 0x74efff)
Solo5:       heap >= 0x74f000 < stack < 0x8000000
Solo5: Clock source: TSC, frequency estimate is 2808856460 Hz
Solo5: PCI:00:02: virtio-net device, base=0xc100, irq=10
Solo5: PCI:00:02: configured, mac=52:54:00:12:34:56, features=0x79bfffe7
Solo5: Application acquired 'service' as network device
2020-10-25 12:16:34 -00:00: INF [netif] Plugging into service with mac 52:54:00:12:34:56 mtu 1500
2020-10-25 12:16:34 -00:00: INF [ethernet] Connected Ethernet interface 52:54:00:12:34:56

Using a virtio drive for the image:
$ qemu-system-x86_64 -cpu Westmere -m 128 -nodefaults -no-acpi -display none -serial stdio -device virtio-net,netdev=n0 -netdev tap,id=n0,ifname=tap100,script=no,downscript=no -device isa-debug-exit \
-drive file=/home/hans/src/hermod-5g/mirage/repository.qcow2,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0xa,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1

SYSLINUX 6.03 20171017 Copyright (C) 1994-2014 H. Peter Anvin et al
Loading unikernel.bin... ok
            |      ___|
  __|  _ \  |  _ \ __ \
\__ \ (   | | (   |  ) |
____/\___/ _|\___/____/
Solo5: Bindings version v0.6.7
Solo5: Memory map: 127 MB addressable:
Solo5:   reserved @ (0x0 - 0xfffff)
Solo5:       text @ (0x100000 - 0x481fff)
Solo5:     rodata @ (0x482000 - 0x51dfff)
Solo5:       data @ (0x51e000 - 0x74efff)
Solo5:       heap >= 0x74f000 < stack < 0x7ffe000
Solo5: Clock source: TSC, frequency estimate is 2809165540 Hz
Solo5: PCI:00:02: virtio-net device, base=0xc040, irq=10
Solo5: PCI:00:02: configured, mac=52:54:00:12:34:56, features=0x79bfffe7
Solo5: PCI:00:0a: virtio-block device, base=0xc000, irq=10
Solo5: PCI:00:0a: configured, capacity=2097152 sectors, features=0x79000e54
qemu-system-x86_64: virtio: zero sized buffers are not allowed

The VM then spins at 100% CPU and has to be killed with SIGKILL.

The system is:
$ uname -a
Linux hans-Latitude-E7470 4.15.0-118-generic #119-Ubuntu SMP Tue Sep 8 12:30:01 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
$ cat /etc/issue
Ubuntu 18.04.5 LTS \n \l

Is the problem that I need to set up handlers etc. in the MirageOS application to handle a virtio block device? I can't find anything about this in the documentation. Or might it be a bug in Solo5/MirageOS?
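In case it matters: my unikernel doesn't declare any block device at all (it's just the skeleton network example). If handlers are needed, I assume the wiring would look roughly like the following. This is only a sketch based on my reading of the mirage 3.x config DSL, which I haven't verified; `disk.img`, `block_test`, and the module names are placeholders, not from my actual setup:

```ocaml
(* config.ml -- sketch: declare a block device and pass it to the unikernel *)
open Mirage

(* Placeholder backing file; on a deployed image this maps to the
   attached virtio-block device. *)
let disk = block_of_file "disk.img"

let main = foreign "Unikernel.Main" (block @-> job)

let () = register "block_test" [ main $ disk ]

(* unikernel.ml -- sketch: read the first sector at start-up *)
(*
open Lwt.Infix

module Main (B : Mirage_block.S) = struct
  let start b =
    B.get_info b >>= fun info ->
    let buf = Cstruct.create info.Mirage_block.sector_size in
    B.read b 0L [ buf ] >|= function
    | Ok () -> Logs.info (fun m -> m "read sector 0 ok")
    | Error _ -> Logs.err (fun m -> m "read sector 0 failed")
end
*)
```

But since my application never asks for a block device, I would have expected Solo5 to simply ignore the one OpenStack attaches rather than hang.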

Regards,

Hans Ole


On Sat, Oct 17, 2020 at 8:04 PM Hans Ole Rafaelsen <hrafaelsen@xxxxxxxxx> wrote:
Hi Martin,

Thanks for your answer.

On Fri, Oct 16, 2020 at 2:15 PM Martin Lucina <martin@xxxxxxxxxx> wrote:
Hi Hans,

On Sunday, 11.10.2020 at 19:55, Hans Ole Rafaelsen wrote:
> Hi,
>
> I'm trying to run some of the tutorial examples on OpenStack. This is a
> "Nokia AirFrame Cloud Infrastructure" with "OpenStack Compute version
> 17.0.7-1"

Any idea what QEMU version that uses internally?
I have asked the provider. I'll let you know if I get a reply.


>
> I have built a virtio target and created a qcow2 image.
>
> When running this on OpenStack it seems to start the Solo5 execution
> environment, but the MirageOS application does not seem to start. The log
> only show:

Which example, specifically?
First the network example, and later the "hello world" example, taken from https://github.com/mirage/mirage-skeleton


> SYSLINUX 6.03 20171017 Copyright (C) 1994-2014 H. Peter Anvin et al
> Loading unikernel.bin... ok
>             |      ___|
>   __|  _ \  |  _ \ __ \
> \__ \ (   | | (   |  ) |
> ____/\___/ _|\___/____/
> Solo5: Bindings version v0.6.6
> Solo5: Memory map: 1024 MB addressable:
> Solo5:   reserved @ (0x0 - 0xfffff)
> Solo5:       text @ (0x100000 - 0x47dfff)
> Solo5:     rodata @ (0x47e000 - 0x519fff)
> Solo5:       data @ (0x51a000 - 0x74afff)
> Solo5:       heap >= 0x74b000 < stack < 0x40000000
> Solo5: Clock source: KVM paravirtualized clock
> Solo5: PCI:00:03: virtio-net device, base=0xc060, irq=11
> Solo5: PCI:00:03: configured, mac=fa:16:3e:a6:51:b7, features=0x48bf81a6
> Solo5: PCI:00:04: virtio-block device, base=0xc000, irq=11
> Solo5: PCI:00:04: configured, capacity=125829120 sectors, features=0x79000e54
>
> Not sure if it is a problem with getting the log output from the
> application or if it is not starting at all. The cloud infrastructure says
> the VM is active, but it does not show the actual resource usage of the VM,
> so it is hard to say what is going on.
> For network applications it does not respond to ping, so it seems like it
> is not running.

Try adding "--logs=*:debug" to the kernel command line.
I tried adding it during the "mirage configure" step. On the local qemu VM I get additional debugging info, but on the OpenStack cloud I get the same result: no logs from the application itself.
 

>
> The qcow2 image runs fine on a local (Ubuntu 18.04 host) qemu/kvm VM and I
> have no problem running other qcow2 images (Ubuntu) on the OpenStack cloud.
>
> Is there some limitation on the images created with mirage/solo5 that
> prevents them from running on the OpenStack platform?

I suspect no one's tried on OpenStack. Also, be aware that the virtio
target is somewhat limited; all the exciting stuff is going on in the other
Solo5-based targets.

I sent another post about that to the mailing list, but it seems to have got stuck. (This post took two days to show up on the list.)

Is there some way to make an hvt target run on qemu/kvm?

--
Hans Ole

-mato

>
> Any tips on how to investigate this problem?
>
> Regards,
>
> Hans Ole Rafaelsen

 

