
Re: [Xen-devel] strange behavior with Multiboot2 on EFI

On Thu, Jun 28, 2018 at 12:26:01PM +0300, Kristaps Čivkulis wrote:
> Roger provided Xen kernel binary for me and it worked. I don't know
> why I couldn't build it properly on FreeBSD.

Please try to run "make distclean" before the build. This cleans up the
whole Xen source tree. Sometimes stale files are lurking around and end
up in the xen.gz output.
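For example, from the top of the Xen source tree (targets as in the
xen.git top-level Makefile):

```
make distclean    # remove all generated files, including stale objects
make xen          # rebuild only the hypervisor
```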

> >> menuentry 'Xen kernel' {
> >>         set root='(hd0,1)'
> >>         multiboot2 /xen
> >
> > I think that you should add at least this to Xen command line:
> >   dom0_mem=1g,max:1g guest_loglvl=all loglvl=all sync_console 
> > com1=115200,8n1 console=com1,vga
> >
> > And what about dom0 kernel? module2?
> At first I tried to load Xen kernel only.
> Is 'module2' the same as 'module' but only for multiboot2? There isn't
> information about it in the GRUB manual [0].
> Also, how should dom0 be provided to Xen? Is passing it as multiboot2
> module enough for Xen kernel to understand?

Yep, but please do not forget about at least the standard arguments for
the kernel, e.g. root=, etc.
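For example, a GRUB menu entry along these lines (the dom0 kernel path
/vmlinuz, the root device and the console arguments are placeholders;
adjust them for your setup):

```
menuentry 'Xen with dom0' {
        set root='(hd0,1)'
        multiboot2 /xen dom0_mem=1g,max:1g guest_loglvl=all loglvl=all com1=115200,8n1 console=com1,vga
        module2 /vmlinuz root=/dev/sda1 ro console=hvc0
}
```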

> >> sudo qemu-system-x86_64 \
> >>        -hda linux.img \
> >>        -bios OVMF-pure-efi.fd \
> >>        -m 4096 \
> >>        -debugcon file:debug.log -global isa-debugcon.iobase=0x402
> >
> > You are missing at least serial console and GDB setup. I would suggest
> > that you add to the QEMU command line at least this:
> >   -serial telnet::10232,server,nowait -gdb tcp::10234
> I was using QEMU's built-in serial console (View -> serial0), and by
> default I can connect to QEMU with gdb using "target remote
> localhost:1234".
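For completeness, the combined QEMU command line with the extra serial
and GDB options would look roughly like this (image name and ports are
just examples):

```
sudo qemu-system-x86_64 \
        -hda linux.img \
        -bios OVMF-pure-efi.fd \
        -m 4096 \
        -debugcon file:debug.log -global isa-debugcon.iobase=0x402 \
        -serial telnet::10232,server,nowait \
        -gdb tcp::10234
```

The built-in GDB stub default (tcp::1234) works too; a telnet serial
console just makes it easier to capture and script the output outside
the QEMU window.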


> > Hence, you are able to get the load offset using link_base_addr and
> > load_base_addr. Then add the load offset to the multiboot2 UEFI entry
> > point. After that set a breakpoint using "hb" in GDB (hardware
> > assisted breakpoint). Do not use "b". IIRC it is a software
> > breakpoint (int 3) and it will not work here because the int 3 opcode
> > is overwritten by GRUB2 during final Xen code relocation. In general
> > I suggest you use "hb". It is more reliable.
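The steps above can be sketched as a short GDB session (all addresses
and the entry point placeholder are hypothetical; take link_base_addr
and load_base_addr from the multiboot2 information your loader passes):

```
(gdb) target remote localhost:10234
# load offset = load_base_addr - link_base_addr
# e.g. 0x40200000 - 0x200000 = 0x40000000
(gdb) hbreak *(0x<mb2_uefi_entry> + 0x40000000)
(gdb) continue
```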
> Thanks!
> On my FreeBSD multiboot2 loader implementation Xen kernel produces
> following output:
>  Xen 4.11-rc
> (XEN) Xen version 4.11-rc (root@xenrtcloud) (gcc (Debian 4.9.2-10)
> 4.9.2) debug=y  Fri Jun 22 09:29:19 UTC 2018
> (XEN) Latest ChangeSet:
> (XEN) Bootloader: unknown

I would suggest that you provide the bootloader type information to Xen
using the relevant multiboot2 tag. This will ease bootloader
differentiation.

> (XEN) Command line: dom0_max_vcpus=4 dom0pvh=1 console=com1,vga
> com1=115200,8n1 guest_loglvl=all loglvl=all
> (XEN) Xen image load base address: 0
> (XEN) Video information:
> (XEN)  VGA is graphics mode 2048x2048, 32 bpp
> (XEN) Disc information:
> (XEN)  Found 0 MBR signatures
> (XEN)  Found 1 EDD information structures
> (XEN)
> (XEN) ****************************************
> (XEN) Panic on CPU 0:
> (XEN) dom0 kernel not specified. Check bootloader configuration.

Please load a dom0 kernel to avoid this issue.

> (XEN) ****************************************
> (XEN)
> (XEN) Reboot in five seconds...
> The problem is with the line
> (XEN) Xen image load base address: 0
> which is not true, because I loaded it at 0x200000. I also provide the
> image load base physical address tag with the same value. Is there
> something else I should set to provide Xen with the correct load base
> address?

If the image is not relocated (is it?) then it is correct. You have to
differentiate between two addresses:
  - __image_base__, which is the "Xen image load base address"
    and equals 0 (zero) in your case,
  - the start of the image, which marks the beginning of the Xen
    image and equals 0x200000 in your case.

Please take a look at xen/arch/x86/xen.lds.S and
xen/arch/x86/boot/head.S for more details.

I hope that helps.


Xen-devel mailing list


