
[Xen-devel] Re: Linux Stubdom Problem



On Thu, 21 Jul 2011, Jiageng Yu wrote:
> 2011/7/19 Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>:
> > CC'ing Tim and xen-devel
> >
> > On Mon, 18 Jul 2011, Jiageng Yu wrote:
> >> 2011/7/16 Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>:
> >> > On Fri, 15 Jul 2011, Jiageng Yu wrote:
> >> >> 2011/7/15 Jiageng Yu <yujiageng734@xxxxxxxxx>:
> >> >> > 2011/7/15 Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>:
> >> >> >> On Fri, 15 Jul 2011, Jiageng Yu wrote:
> >> >> >>> > Does it mean you are actually able to boot an HVM guest using 
> >> >> >>> > Linux
> >> >> >>> > based stubdoms?? Did you manage to solve the framebuffer problem 
> >> >> >>> > too?
> >> >> >>>
> >> >> >>>
> >> >> >>> The HVM guest boots, but the boot process terminates because the
> >> >> >>> vga bios is not invoked by seabios. I have been stuck here for a
> >> >> >>> week.
> >> >> >>>
> >> >> >>
> >> >> >> There was a bug in xen-unstable.hg or seabios that would prevent
> >> >> >> the vga bios from being loaded; it should be fixed now.
> >> >> >>
> >> >> >> Alternatively you can temporarily work around the issue with this
> >> >> >> hacky patch:
> >> >> >>
> >> >> >> ---
> >> >> >>
> >> >> >>
> >> >> >> diff -r 00d2c5ca26fd tools/firmware/hvmloader/hvmloader.c
> >> >> >> --- a/tools/firmware/hvmloader/hvmloader.c    Fri Jul 08 18:35:24 2011 +0100
> >> >> >> +++ b/tools/firmware/hvmloader/hvmloader.c    Fri Jul 15 11:37:12 2011 +0000
> >> >> >> @@ -430,7 +430,7 @@ int main(void)
> >> >> >>             bios->create_pir_tables();
> >> >> >>     }
> >> >> >>
> >> >> >> -    if ( bios->load_roms )
> >> >> >> +    if ( 1 )
> >> >> >>     {
> >> >> >>         switch ( virtual_vga )
> >> >> >>         {
> >> >> >>
> >> >> >>
> >> >> >
> >> >> > Yes. The VGA BIOS is loaded now. However, upstream qemu subsequently
> >> >> > receives a SIGSEGV signal. I am trying to print the call stack at the
> >> >> > point where the signal is received.
> >> >> >
> >> >>
> >> >> Hi,
> >> >>
> >> >>     I found the cause of the SIGSEGV signal:
> >> >>
> >> >>     cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf, int len, int is_write)
> >> >>         ->memcpy(buf, ptr + (addr & ~TARGET_PAGE_MASK), l);
> >> >>
> >> >>     In my case, ptr=0 and addr=0xc253e, so when qemu attempts to access
> >> >> address 0x53e, the SIGSEGV signal is generated.
> >> >>
> >> >>     I believe qemu is trying to access the vram at this moment. This
> >> >> code itself seems fine, so I will continue looking for the root cause.
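A minimal standalone sketch of the failing path described above, assuming
qemu_get_ram_ptr() returned NULL for the vram address; the helper name
read_guest_bytes and the mask constant are illustrative, not qemu's actual
code:

#include <stdint.h>
#include <string.h>

#define TARGET_PAGE_MASK (~0xfffUL)   /* illustrative 4K page mask */

/* With ptr == NULL and addr == 0xc253e, the source pointer below
 * becomes (void *)0x53e, which is unmapped: hence the SIGSEGV. */
static void read_guest_bytes(uint8_t *buf, uint8_t *ptr,
                             uint64_t addr, int len)
{
    /* ptr should be the host-side mapping of the guest page that
     * contains addr; here it is NULL, so the read faults. */
    memcpy(buf, ptr + (addr & ~TARGET_PAGE_MASK), len);
}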
> >> >>
> >> >
> >> > The vram is allocated by qemu, see hw/vga.c:vga_common_init.
> >> > qemu_ram_alloc under xen ends up calling xen_ram_alloc that calls
> >> > xc_domain_populate_physmap_exact.
> >> > xc_domain_populate_physmap_exact is the hypercall that should ask Xen to
> >> > add the missing vram pages in the guest. Maybe this hypercall is failing
> >> > in your case?
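A sketch of what that populate call looks like, assuming the libxc API of
the time; populate_guest_pages and its parameters are an illustrative
wrapper, not qemu's actual xen_ram_alloc code:

#include <stdlib.h>
#include <xenctrl.h>

/* Ask Xen to back nr_pages guest frames starting at base_gfn with real
 * memory, as qemu's xen_ram_alloc path does for the vram.  Returns 0 on
 * success; failure leaves the gfns unpopulated, matching the symptom. */
int populate_guest_pages(xc_interface *xch, uint32_t domid,
                         unsigned long base_gfn, unsigned long nr_pages)
{
    xen_pfn_t *pfns = malloc(nr_pages * sizeof(*pfns));
    int rc;

    if (!pfns)
        return -1;
    for (unsigned long i = 0; i < nr_pages; i++)
        pfns[i] = base_gfn + i;

    /* order 0 = single 4K pages, no special memory flags */
    rc = xc_domain_populate_physmap_exact(xch, domid, nr_pages, 0, 0, pfns);
    free(pfns);
    return rc;
}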
> >>
> >>
> >> Hi,
> >>
> >>     I continued to investigate this bug and found that the mmu_update
> >> hypercall issued from qemu_remap_bucket (via xc_map_foreign_bulk) is failing:
> >>
> >> do_mmu_update
> >>     ->mod_l1_entry
> >>         -> if ( !p2m_is_ram(p2mt) || unlikely(mfn == INVALID_MFN) )
> >>                return -EINVAL;
> >>
> >>     mfn == INVALID_MFN, because:
> >>
> >> mod_l1_entry
> >>     ->gfn_to_mfn(p2m_get_hostp2m(pg_dom), l1e_get_pfn(nl1e), &p2mt));
> >>         ->p2m->get_entry
> >>             ->p2m_gfn_to_mfn
> >>                 -> if ( gfn > p2m->max_mapped_pfn )
> >>                        /* This pfn is higher than the highest the p2m map currently holds */
> >>                        return _mfn(INVALID_MFN);
> >>
> >>     The p2m->max_mapped_pfn is usually 0xfffff. In our case,
> >> mmu_update.val exceeds 0x8000000100000000. Additionally, l1e =
> >> l1e_from_intpte(mmu_update.val) and gfn = l1e_get_pfn(l1e), so gfn
> >> will exceed 0xfffff.
> >>
> >>     In the minios-based stubdom case, the mmu_update.vals do not
> >> exceed 0x8000000100000000. Next, I will investigate why mmu_update.val
> >> exceeds 0x8000000100000000.
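To make the arithmetic above concrete, a standalone sketch of the pfn
extraction (the mask below is a simplified stand-in for what
l1e_from_intpte/l1e_get_pfn do; the real macros also strip the low flag
bits):

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
/* 52-bit physical address field of a PTE, flag bits excluded */
#define PADDR_MASK 0x000ffffffffff000ULL

int main(void)
{
    uint64_t val = 0x8000000100000000ULL;  /* bit 63 is the NX flag */
    uint64_t gfn = (val & PADDR_MASK) >> PAGE_SHIFT;

    /* Prints gfn = 0x100000: one past max_mapped_pfn (0xfffff),
     * so p2m_gfn_to_mfn returns INVALID_MFN. */
    printf("gfn = %#llx\n", (unsigned long long)gfn);
    return 0;
}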
> >
> > It looks like the address of the guest that qemu is trying to map is not
> > valid.
> > Make sure you are running a guest with less than 2GB of ram, otherwise
> > you need the patch series that Anthony sent on Friday:
> >
> > http://marc.info/?l=qemu-devel&m=131074042905711&w=2
> 
> Not this problem. I never allocate more than 2GB for the HVM guest. The
> call stack in qemu is:
> 
> qemu_get_ram_ptr
>     ->qemu_map_cache(addr, 0, 1)
>         -> if (!entry->vaddr_base || entry->paddr_index != address_index ||
>                !test_bit(address_offset >> XC_PAGE_SHIFT, entry->valid_mapping)) {
>                qemu_remap_bucket(entry, size ? : MCACHE_BUCKET_SIZE, address_index);
>                    ->xc_map_foreign_bulk(xen_xc, xen_domid, PROT_READ|PROT_WRITE,
>                                          pfns, err, nb_pfn);
> 
> The qemu tries to map pages from the HVM guest (xen_domid) into the Linux
> stubdom. But some HVM pages' pfns are larger than 0xfffff. So, in
> p2m_gfn_to_mfn, the following condition holds (p2m->max_mapped_pfn = 0xfffff):
> 
>     if ( gfn > p2m->max_mapped_pfn )
>         /* This pfn is higher than the highest the p2m map currently holds */
>         return _mfn(INVALID_MFN);
> 
>  In the minios stubdom case, the HVM pages' pfns do not exceed 0xfffff.
> Maybe the address translation in the Linux stubdom causes this problem?
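For reference, a sketch of the mapping call at the bottom of that stack,
assuming the libxc API of the time; map_hvm_pages is an illustrative
wrapper, not qemu's code. The per-pfn err array is useful here, since it
identifies exactly which pfns the hypervisor rejected:

#include <sys/mman.h>
#include <xenctrl.h>

/* Map nb_pfn pages of the HVM guest into the stubdom's address space.
 * On the failing path described above, some err[i] come back non-zero
 * because pfns[i] > the guest's max_mapped_pfn. */
void *map_hvm_pages(xc_interface *xch, uint32_t hvm_domid,
                    const xen_pfn_t *pfns, int *err, unsigned int nb_pfn)
{
    return xc_map_foreign_bulk(xch, hvm_domid, PROT_READ | PROT_WRITE,
                               pfns, err, nb_pfn);
}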

Trying to map a pfn > 0xfffff is clearly a mistake if the guest's memory
does not exceed 2G: a pfn of 0xfffff corresponds to a guest-physical
address of 0xfffff * 4096, which is just under 4G, well above 2G.


>  BTW, in minios stubdom case, there seems no hvmloader process. Is it
> needed in linux stubdom?

hvmloader is the first thing that runs within the guest; it is not a
process in the stubdom or in dom0.
It is required in both minios and linux stubdoms.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

