
Re: [Xen-devel] pvgrub



When I made the image I created a single partition, and it's ext3, so the
boot information is on the same root partition as the rest of the system. I
don't know if that might be an issue. It's definitely ext3, though.
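
For what it's worth, one way to double-check the feature flags from dom0 is
tune2fs. A minimal sketch, assuming the guest disk is an LVM volume visible
in dom0 (the device path below is a placeholder, not from this setup; a
file-backed image would first need losetup at the partition's offset):

    # Show the superblock feature list for the guest's filesystem.
    # /dev/VolGroup00/guest-disk is a hypothetical path.
    tune2fs -l /dev/VolGroup00/guest-disk | grep -i features

    # A plain ext3 volume should not list "extent" here; if it does,
    # pvgrub will trip over it as Jeremy describes below.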

Dave

On Fri, 2010-04-02 at 11:45 -0700, Jeremy Fitzhardinge wrote:
> On 04/02/2010 10:21 AM, David P. Quigley wrote:
> > So a little bit of background.
> >
> > I have a VM which I know boots properly as an HVM guest, and I want to
> > run it as a paravirt guest. It is a Fedora 11-based image that I built
> > from a kickstart, so I know the kernel has paravirt guest support. I have
> > built the latest xen-unstable tree, including the stub domains, to get
> > pvgrub to boot from. My domU config has the following lines in it:
> >
> > kernel = "/usr/lib/xen/boot/pv-grub-x86_64.gz"
> > extra = "(hd0,0)/boot/grub/menu.lst"
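
(For reference, a complete config of this shape; everything beyond the two
lines above is a hedged sketch with a placeholder name and disk path, not
taken from this setup:)

    name   = "fedora11-pv"
    memory = 512
    kernel = "/usr/lib/xen/boot/pv-grub-x86_64.gz"
    extra  = "(hd0,0)/boot/grub/menu.lst"
    # One file-backed disk exported as hd0; the path is hypothetical.
    disk   = [ "file:/var/lib/xen/images/fedora11.img,hda,w" ]
    vif    = [ "" ]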
> >
> > If I remove my storage devices from the config, it boots into the grub
> > console, so I know the stub domain is working. When I leave the
> > storage devices in and boot with xm create -c <configfile>, I get the
> > output below and then it hangs.
> >
> > Is there any additional debug information that I can grab to try to
> > figure this out?
> >    
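
(On the debug question: a couple of standard places to look, sketched here
with stock Xen tooling rather than anything specific to this report, would
be the hypervisor and xend logs plus the xenstore nodes that blkfront is
negotiating with:)

    xm dmesg                                  # hypervisor console log
    xm log                                    # xend's log
    # Inspect the vbd backend state for domid 8 (the id from "xm create"):
    xenstore-ls /local/domain/0/backend/vbd/8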
> 
> Make sure your /boot is ext3, not ext4.  I found that pvgrub doesn't 
> seem to notice the "extents" feature flag, and will drop into an 
> infinite loop if it encounters a directory with extents (probably any 
> extent-based file will make it upset in some way).
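
(Since the image came from a kickstart, one way to pin this down at build
time is to force the fstype in the kickstart partitioning. A hedged sketch,
shown with a separate /boot, although a single ext3 root like Dave's should
work as well:)

    # Force ext3 so pvgrub's ext2/3 code can read /boot (no extents).
    part /boot --fstype=ext3 --size=200
    part /     --fstype=ext3 --size=4096 --grow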
> 
>      J
> 
> > Dave
> >
> > # xm create -c domU-pv.conf
> > /usr/lib64/python2.6/site-packages/xen/xm/group.py:23: DeprecationWarning: 
> > the sets module is deprecated
> >    from sets import Set
> > Using config file "./domU-pv.conf".
> > Started domain SEHostStorage (id=8)
> >                                     Xen Minimal OS!
> >    start_info: 0xaa6000(VA)
> >      nr_pages: 0x20000
> >    shared_inf: 0xbfa56000(MA)
> >       pt_base: 0xaa9000(VA)
> > nr_pt_frames: 0x9
> >      mfn_list: 0x9a6000(VA)
> >     mod_start: 0x0(VA)
> >       mod_len: 0
> >         flags: 0x0
> >      cmd_line: (hd0,0)/boot/grub/menu.lst
> >    stack:      0x965980-0x985980
> > MM: Init
> >        _text: 0x0(VA)
> >       _etext: 0x69774(VA)
> >     _erodata: 0x8f000(VA)
> >       _edata: 0x97ae0(VA)
> > stack start: 0x965980(VA)
> >         _end: 0x9a5f88(VA)
> >    start_pfn: ab5
> >      max_pfn: 20000
> > Mapping memory range 0xc00000 - 0x20000000
> > setting 0x0-0x8f000 readonly
> > skipped 0x1000
> > MM: Initialise page allocator for baf000(baf000)-20000000(20000000)
> > MM: done
> > Demand map pfns at 20001000-2020001000.
> > Heap resides at 2020002000-4020002000.
> > Initialising timer interface
> > Initialising console ... done.
> > gnttab_table mapped at 0x20001000.
> > Initialising scheduler
> > Thread "Idle": pointer: 0x2020002050, stack: 0xcc0000
> > Initialising xenbus
> > Thread "xenstore": pointer: 0x2020002800, stack: 0xcd0000
> > Dummy main: start_info=0x985a80
> > Thread "main": pointer: 0x2020002fb0, stack: 0xce0000
> > Thread "pcifront": pointer: 0x2020003760, stack: 0xcf0000
> > "main" "(hd0,0)/boot/grub/menu.lst"
> > pcifront_watches: waiting for backend path to appear device/pci/0/backend
> > vbd 768 is hd0
> > ******************* BLKFRONT for device/vbd/768 **********
> >
> >
> > backend at /local/domain/0/backend/vbd/8/768
> > Failed to read /local/domain/0/backend/vbd/8/768/feature-flush-cache.
> > 2097152 sectors of 512 bytes
> > **************************
> > vbd 5632 is hd1
> > ******************* BLKFRONT for device/vbd/5632 **********
> >
> >
> > backend at /local/domain/0/backend/vbd/8/5632
> >
> >
> >
> >


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

