
Re: Xen PVH domU start-of-day VCPU state



On Tuesday, 26.05.2020 at 18:30, Roger Pau Monné wrote:
> > Turns out that the .note.solo5.xen section as defined in boot.S was not
> > marked allocatable, and that was doing <something> that was confusing our
> > linker script[1] (?).
> 
> Hm, I would have said there was no need to load notes into memory, and
> hence using a MemSize of 0 would be fine.
> 
> Maybe libelf loader was somehow getting confused and not loading the
> image properly?
> 
> Can you paste the output of `xl -vvv create ...` when using the broken
> image?

Here you go:

Parsing config from ./test_hello.xl
libxl: debug: libxl_create.c:1671:do_domain_create: Domain 0:ao 0x5593c42e7e30: 
create: how=(nil) callback=(nil) poller=0x5593c42e7670
libxl: debug: libxl_create.c:1007:initiate_domain_create: Domain 2:running 
bootloader
libxl: debug: libxl_bootloader.c:335:libxl__bootloader_run: Domain 2:no 
bootloader configured, using user supplied kernel
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch 
w=0x5593c42e9590: deregister unregistered
libxl: debug: libxl_sched.c:82:libxl__set_vcpuaffinity: Domain 2:New soft 
affinity for vcpu 0 has unreachable cpus
domainbuilder: detail: xc_dom_allocate: cmdline="", features=""
domainbuilder: detail: xc_dom_kernel_file: filename="test_hello.xen"
domainbuilder: detail: xc_dom_malloc_filemap    : 191 kB
domainbuilder: detail: xc_dom_boot_xen_init: ver 4.11, caps xen-3.0-x86_64 
xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
domainbuilder: detail: xc_dom_parse_image: called
domainbuilder: detail: xc_dom_find_loader: trying multiboot-binary loader ...
domainbuilder: detail: loader probe failed
domainbuilder: detail: xc_dom_find_loader: trying HVM-generic loader ...
domainbuilder: detail: loader probe failed
domainbuilder: detail: xc_dom_find_loader: trying Linux bzImage loader ...
domainbuilder: detail: xc_dom_probe_bzimage_kernel: kernel is not a bzImage
domainbuilder: detail: loader probe failed
domainbuilder: detail: xc_dom_find_loader: trying ELF-generic loader ...
domainbuilder: detail: loader probe OK
xc: detail: ELF: phdr: paddr=0x100000 memsz=0x6264
xc: detail: ELF: phdr: paddr=0x107000 memsz=0xed48
xc: detail: ELF: memory: 0x100000 -> 0x115d48
xc: detail: ELF: note: PHYS32_ENTRY = 0x100020
xc: detail: ELF: Found PVH image
xc: detail: ELF: VIRT_BASE unset, using 0
xc: detail: ELF_PADDR_OFFSET unset, using 0
xc: detail: ELF: addresses:
xc: detail:     virt_base        = 0x0
xc: detail:     elf_paddr_offset = 0x0
xc: detail:     virt_offset      = 0x0
xc: detail:     virt_kstart      = 0x100000
xc: detail:     virt_kend        = 0x115d48
xc: detail:     virt_entry       = 0x1001e0
xc: detail:     p2m_base         = 0xffffffffffffffff
domainbuilder: detail: xc_dom_parse_elf_kernel: hvm-3.0-x86_32: 0x100000 -> 
0x115d48
domainbuilder: detail: xc_dom_mem_init: mem 256 MB, pages 0x10000 pages, 4k each
domainbuilder: detail: xc_dom_mem_init: 0x10000 pages
domainbuilder: detail: xc_dom_boot_mem_init: called
domainbuilder: detail: range: start=0x0 end=0x10000400
domainbuilder: detail: xc_dom_malloc            : 512 kB
xc: detail: PHYSICAL MEMORY ALLOCATION:
xc: detail:   4KB PAGES: 0x0000000000000c00
xc: detail:   2MB PAGES: 0x000000000000007a
xc: detail:   1GB PAGES: 0x0000000000000000
domainbuilder: detail: xc_dom_build_image: called
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x100+0x16 
at 0x7f5609445000
domainbuilder: detail: xc_dom_alloc_segment:   kernel       : 0x100000 -> 
0x116000  (pfn 0x100 + 0x16 pages)
xc: detail: ELF: phdr 1 at 0x7f5609445000 -> 0x7f560944b264
xc: detail: ELF: phdr 2 at 0x7f560944c000 -> 0x7f5609453120
domainbuilder: detail: xc_dom_load_acpi: 64 bytes at address fc008000
domainbuilder: detail: xc_dom_load_acpi: 4096 bytes at address fc000000
domainbuilder: detail: xc_dom_load_acpi: 28672 bytes at address fc001000
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x116+0x1 
at 0x7f5609ace000
domainbuilder: detail: xc_dom_alloc_segment:   HVM start info : 0x116000 -> 
0x117000  (pfn 0x116 + 0x1 pages)
domainbuilder: detail: alloc_pgtables_hvm: doing nothing
domainbuilder: detail: xc_dom_build_image  : virt_alloc_end : 0x117000
domainbuilder: detail: xc_dom_build_image  : virt_pgtab_end : 0x0
domainbuilder: detail: xc_dom_boot_image: called
domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-x86_64
domainbuilder: detail: xc_dom_compat_check: supported guest type: 
xen-3.0-x86_32p
domainbuilder: detail: xc_dom_compat_check: supported guest type: 
hvm-3.0-x86_32 <= matches
domainbuilder: detail: xc_dom_compat_check: supported guest type: 
hvm-3.0-x86_32p
domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_64
domainbuilder: detail: domain builder memory footprint
domainbuilder: detail:    allocated
domainbuilder: detail:       malloc             : 515 kB
domainbuilder: detail:       anon mmap          : 0 bytes
domainbuilder: detail:    mapped
domainbuilder: detail:       file mmap          : 191 kB
domainbuilder: detail:       domU mmap          : 92 kB
domainbuilder: detail: vcpu_hvm: called
domainbuilder: detail: xc_dom_gnttab_hvm_seed: called, pfn=0xff000
domainbuilder: detail: xc_dom_gnttab_hvm_seed: called, pfn=0xff001
domainbuilder: detail: xc_dom_release: called
libxl: debug: libxl_event.c:2194:libxl__ao_progress_report: ao 0x5593c42e7e30: 
progress report: callback queued aop=0x5593c42fea10
libxl: debug: libxl_event.c:1869:libxl__ao_complete: ao 0x5593c42e7e30: 
complete, rc=0
libxl: debug: libxl_event.c:1404:egc_run_callbacks: ao 0x5593c42e7e30: progress 
report: callback aop=0x5593c42fea10
libxl: debug: libxl_create.c:1708:do_domain_create: Domain 0:ao 0x5593c42e7e30: 
inprogress: poller=0x5593c42e7670, flags=ic
libxl: debug: libxl_event.c:1838:libxl__ao__destroy: ao 0x5593c42e7e30: destroy
xencall:buffer: debug: total allocations:233 total releases:233
xencall:buffer: debug: current allocations:0 maximum allocations:3
xencall:buffer: debug: cache current size:3
xencall:buffer: debug: cache hits:215 misses:3 toobig:15
xencall:buffer: debug: total allocations:0 total releases:0
xencall:buffer: debug: current allocations:0 maximum allocations:0
xencall:buffer: debug: cache current size:0
xencall:buffer: debug: cache hits:0 misses:0 toobig:0

> 
> > 
> > If I make this simple change:
> > 
> > --- a/bindings/xen/boot.S
> > +++ b/bindings/xen/boot.S
> > @@ -32,7 +32,7 @@
> >  #define ENTRY(x) .text; .globl x; .type x,%function; x:
> >  #define END(x)   .size x, . - x
> > 
> > -.section .note.solo5.xen
> > +.section .note.solo5.xen, "a", @note
> > 
> >         .align  4
> >         .long   4
> > 
> > then I get the expected output from readelf -lW, and I can get as far as
> > the C _start() with no issues!
> > 
> > FWIW, here's the diff of readelf -lW before/after:
> > 
> > --- before  2020-05-26 17:36:46.117885855 +0200
> > +++ after   2020-05-26 17:38:07.090508322 +0200
> > @@ -8,9 +8,9 @@
> >    INTERP         0x001000 0x0000000000100000 0x0000000000100000 0x000018 
> > 0x000018 R   0x8
> >        [Requesting program interpreter: /nonexistent/solo5/]
> >    LOAD           0x001000 0x0000000000100000 0x0000000000100000 0x00615c 
> > 0x00615c R E 0x1000
> > -  LOAD           0x008000 0x0000000000107000 0x0000000000107000 0x007120 
> > 0x00ed28 RW  0x1000
> > +  LOAD           0x008000 0x0000000000107000 0x0000000000107000 0x006120 
> > 0x00dd28 RW  0x1000
> 
> This seems suspicious: there's a change in the size of the LOAD
> section, but your change to the note type should not affect the LOAD
> section?

Indeed.

> 
> Hm, maybe it does because the .note.solo5.xen was considered writable
> by default?

I don't think so. From the broken image:

  [ 8] .note.solo5.xen   NOTE             00000000001070c4  0000f120
       0000000000000014  0000000000000000           0     0     4

From the good image:

  [ 8] .note.solo5.xen   NOTE             00000000001070c4  000080c4
       0000000000000014  0000000000000000   A       0     0     4

-mato
