
Re: [Xen-devel] Kernel oops with first LVM2 try



Hi guys,

Well, solved this one. It was a rather inelegant response to not having enough
RAM assigned to Dom-0: I was using 128MB, and upping it to 256MB made things
work okay (will this be enough?). Are any changes afoot to handle this
situation more gracefully, or is it more something for the Linux kernel folks
to attend to?
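
For anyone else hitting this: the Dom-0 allocation is set with the dom0_mem
option on the Xen line in GRUB. Roughly what the menu.lst stanza looks like
(paths and the root device are illustrative, and I believe the value is taken
in KB here, so 262144 gives 256MB; check the docs for your Xen version):

    title Xen / XenLinux 2.6.8.1
        kernel /boot/xen.gz dom0_mem=262144
        module /boot/vmlinuz-2.6.8.1-xen0 root=/dev/sda1 ro console=tty0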

Regards,
Paul.

On Tuesday 28 September 2004 04:04 pm, Paul Dorman wrote:
> Hi all,
>
> I'm just beginning my LVM experiments. I had no trouble creating an LVM2 LV
> and populating it with a file system. When I went to boot my shiny new
> filesystem, however, I got an oops. I was going to repeat the exercise with
> a loopback image holding a duplicate of what was in the LVM LV, but the
> server rebooted during a copy from /mnt/lvmlv to /mnt/loopback. Perhaps
> the two are connected?
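>
> For reference, the sort of sequence involved in setting that up (the backing
> partition and LV size here are purely illustrative; the names match the
> config below):
>
>     pvcreate /dev/sdb1                 # illustrative backing partition
>     vgcreate xenvg /dev/sdb1
>     lvcreate -L 4G -n lv_test xenvg    # size illustrative
>     mkreiserfs /dev/xenvg/lv_test      # answer 'y' at the prompt
>     mount /dev/xenvg/lv_test /mnt/lvmlv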
>
> I'll be happy to do straces etc, but I'll need specific instructions as I'm
> not a programmer :o). Also, I don't yet have a serial cable for debugging,
> so my information will surely be a little limited.
>
> This is all with a pristine kernel 2.6.8.1 Xen tree (bk pull as of
> yesterday) built with 'make world'. I'll try to repeat the experiment
> tomorrow with a loopback device (a sketch of what I mean is below). I lost
> my partition table just now (my server had rebooted anyhow) while trying to
> get grub to work better (it refused to give me the menu), so gpart is doing
> its scanning thing right now and I expect it's going to take hours :o(
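>
> For comparison, the loopback version would be something like the following
> (file name and size illustrative); once unmounted again, the loop device
> could be handed to the domain with a 'phy:loop0,sda1,w' disk entry:
>
>     dd if=/dev/zero of=/root/loop.img bs=1M count=1024
>     losetup /dev/loop0 /root/loop.img
>     mkreiserfs /dev/loop0              # answer 'y' at the prompt
>     mount /dev/loop0 /mnt/loopback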
>
> Thanks all. I hope this isn't something obvious or stupid on my part (no
> pun intended).
>
> Paul
>
> Here's my initial test xmdefconfig file:
>
> #  -*- mode: python; -*-
> #============================================================================
> # Python configuration setup for 'xm create'.
> # This script sets the parameters used when a domain is created using
> # 'xm create'.
> # You use a separate script for each domain you want to create, or
> # you can set the parameters for the domain on the xm command line.
> #============================================================================
>
> #----------------------------------------------------------------------------
> # Kernel image file.
> kernel = "/boot/vmlinuz-2.6.8.1-xenU"
>
> # Initial memory allocation (in megabytes) for the new domain.
> memory = 256
>
> # A name for your domain. All domains must have different names.
> name = "domain1"
>
> # Which CPU to start domain on?
> #cpu = -1   # leave to Xen to pick
>
> #----------------------------------------------------------------------------
> # Define network interfaces.
>
> # Number of network interfaces. Default is 1.
> #nics=1
>
> # Optionally define mac and/or bridge for the network interfaces.
> # Random MACs are assigned if not given.
> #vif = [ 'mac=aa:00:00:00:00:11, bridge=xen-br0' ]
>
> #----------------------------------------------------------------------------
> # Define the disk devices you want the domain to have access to, and
> # what you want them accessible as.
> # Each disk entry is of the form phy:UNAME,DEV,MODE
> # where UNAME is the device, DEV is the device name the domain will see,
> # and MODE is r for read-only, w for read-write.
>
> disk = [ 'phy:xenvg/lv_test,sda1,w' ]
> #,
> #     'phy:xenvg/lv_test_swp,sda2,w' ] #for swap
>
> #----------------------------------------------------------------------------
> # Set the kernel command line for the new domain.
> # You only need to define the IP parameters and hostname if the domain's
> # IP config doesn't, e.g. in ifcfg-eth0 or via DHCP.
> # You can use 'extra' to set the runlevel and custom environment
> # variables used by custom rc scripts (e.g. VMID=, usr= ).
>
> # Set if you want dhcp to allocate the IP address.
> #dhcp="dhcp"
> #ip = "10.10.10.4"
> #netmask = "255.255.255.0"
> #gateway = "10.10.10.1"
>
> # Set netmask.
> #netmask=
> # Set default gateway.
> #gateway=
> # Set the hostname.
> #hostname= "vm%d" % vmid
>
> # Set root device.
> root = "/dev/sda1 ro"
>
> # Root device for nfs.
> #root = "/dev/nfs"
> # The nfs server.
> #nfs_server = '169.254.1.0'
> # Root directory on the nfs server.
> #nfs_root   = '/full/path/to/root/directory'
>
> # Sets runlevel 4.
> extra = "4"
>
> #----------------------------------------------------------------------------
> # Set according to whether you want the domain restarted when it exits.
> # The default is 'onreboot', which restarts the domain when it shuts down
> # with exit code reboot.
> # Other values are 'always', and 'never'.
>
> #restart = 'onreboot'
>
> #============================================================================
>
> And here's what I got when I tried to start it:
>
> Xen1:~# xm create -c vmid=1
> Using config file "/etc/xen/xmdefconfig".
> Started domain domain1, console on port 9607
> ************ REMOTE CONSOLE: CTRL-] TO QUIT ********
> Linux version 2.6.8.1-xenU (root@Xen1) (gcc version 3.3.4 (Debian
> 1:3.3.4-6sarge1)) #1 Fri Sep 24 10:43:11 NZST 2004
> BIOS-provided physical RAM map:
>  Xen: 0000000000000000 - 0000000010000000 (usable)
> 256MB LOWMEM available.
> DMI not present.
> Built 1 zonelists
> Kernel command line:  ip=dhcp root=/dev/sda1 ro 4
> Initializing CPU#0
> PID hash table entries: 2048 (order 11: 16384 bytes)
> Xen reported: 2791.047 MHz processor.
> Using tsc for high-res timesource
> Dentry cache hash table entries: 65536 (order: 6, 262144 bytes)
> Inode-cache hash table entries: 32768 (order: 5, 131072 bytes)
> Memory: 256584k/262144k available (1507k kernel code, 5224k reserved,
> 452k data, 92k init, 0k highmem)
> Checking if this processor honours the WP bit even in supervisor mode... Ok.
> Calibrating delay loop... 5570.56 BogoMIPS
> Mount-cache hash table entries: 512 (order: 0, 4096 bytes)
> CPU: Trace cache: 12K uops, L1 D cache: 8K
> CPU: L2 cache: 512K
> CPU: Intel(R) Xeon(TM) CPU 2.80GHz stepping 05
> Enabling unmasked SIMD FPU exception support... done.
> Checking 'hlt' instruction... disabled
> NET: Registered protocol family 16
> Initializing Cryptographic API
> RAMDISK driver initialized: 16 RAM disks of 4096K size 1024 blocksize
> Xen virtual console successfully installed as tty
> Event-channel device installed.
> Initialising Xen virtual block device
> Using anticipatory io scheduler
> Initialising Xen virtual ethernet frontend driver.
> Netfront recovered tx=0 rxfree=0
> NET: Registered protocol family 2
> IP: routing cache hash table of 2048 buckets, 16Kbytes
> TCP: Hash tables configured (established 16384 bind 32768)
> NET: Registered protocol family 1
> NET: Registered protocol family 17
> IP-Config: Guessing netmask 255.0.0.0
> IP-Config: Complete:
>       device=eth0, addr=62.0.0.0, mask=255.0.0.0, gw=255.255.255.255,
>      host=62.0.0.0, domain=, nis-domain=(none),
>      bootserver=255.255.255.255, rootserver=255.255.255.255, rootpath=
> ReiserFS: sda1: found reiserfs format "3.6" with standard journal
> ReiserFS: sda1: using ordered data mode
> ReiserFS: sda1: journal params: device sda1, size 8192, journal first
> block 18, max trans len 1024, max batch 900, max commit age 30, max
> trans age 30
> ReiserFS: sda1: checking transaction log (sda1)
> ReiserFS: sda1: Using r5 hash to sort names
> VFS: Mounted root (reiserfs filesystem) readonly.
> Freeing unused kernel memory: 92k freed
> INIT: version 2.86 booting
> Unable to handle kernel paging request at virtual address cfc00000
>  printing eip:
> c0194ca1
> *pde = ma 07c1c067 pa 0003f067
> *pte = ma 00000000 pa 55555000
>  [<c0142ffe>] do_no_page+0x241/0x3ca
>
> Oops: 0002 [#1]
> PREEMPT
> Modules linked in:
> CPU:    0
> EIP:    0061:[<c0194ca1>]    Not tainted
> EFLAGS: 00010202   (2.6.8.1-xenU)
> EIP is at reiserfs_readdir+0x3c8/0x586
> eax: 000100c4   ebx: ffffa5d4   ecx: 3ff37121   edx: cf8e1eb0
> esi: cffb3721   edi: cfc00000   ebp: cfd5e0b4   esp: cf8e1e0c
> ds: 0069   es: 0069   ss: 0069
> Process rcS (pid: 51, threadinfo=cf8e0000 task=c1332c50)
> Stack: cf8e1ed0 cf8e1ef0 00000001 d08bafb8 ffffffff 00000000 00000000
> 000ce510 cfc8f000 ffff0000 000100c4 20000000 00000000 d08bafb9 00000000
> cf8e1eb0 cfc8f030 0000ce51 00000001 cfc92c5c cfd5fccc cfc92c5c 00000001
> cfc8f030
> Call Trace:
>  [<c0142ffe>] do_no_page+0x241/0x3ca
>
> Code: f3 a5 f6 c3 02 74 02 66 a5 f6 c3 01 74 01 a4 8b 8c 24 6c 01
>  <1>Unable to handle kernel NULL pointer dereference at virtual address 0000010f
>  printing eip:
> c0143e43
> *pde = ma 00000000 pa 55555000
>  [<c0145c28>] exit_mmap+0x133/0x15b
>
>  [<c011a030>] mmput+0x66/0x8d
>
>  [<c011e2b7>] do_exit+0x150/0x41a
>
>  [<c010aaa0>] do_divide_error+0x0/0xfa
>
>  [<c0115627>] do_page_fault+0x226/0x641
>
>  [<c010db14>] page_fault+0x38/0x40
>
>  [<c0194ca1>] reiserfs_readdir+0x3c8/0x586
>
>  [<c0142ffe>] do_no_page+0x241/0x3ca
>
> Oops: 0002 [#2]
> PREEMPT
> Modules linked in:
> CPU:    0
> EIP:    0061:[<c0143e43>]    Not tainted
> EFLAGS: 00010202   (2.6.8.1-xenU)
> EIP is at __remove_shared_vm_struct+0x1b/0x5d
> eax: ffffffff   ebx: cfd77ba0   ecx: cfc8cb80   edx: cf8e45e4
> esi: cfc8cb80   edi: cfd77ba0   ebp: 0000000b   esp: cf8e1c78
> ds: 0069   es: 0069   ss: 0069
> Process rcS (pid: 51, threadinfo=cf8e0000 task=c1332c50)
> Stack: cf8e0000 c0143ec0 cfd77ba0 cfc8cb80 cf8e45e4 cfd77bf4 cfd77ba0
> cfd6e680 c0145c28 cfd77ba0 00000000 00000000 00000000 ffffffff cf8e1cbc
> 00000000 c03091ec 00000066 cfd6e680 cfd6e6a0 c1332c50 c011a030 cfd6e680
> c030a40c
> Call Trace:
>  [<c0143ec0>] remove_vm_struct+0x3b/0x97
>
>  [<c0145c28>] exit_mmap+0x133/0x15b
>
>  [<c011a030>] mmput+0x66/0x8d
>
>  [<c011e2b7>] do_exit+0x150/0x41a
>
>  [<c010aaa0>] do_divide_error+0x0/0xfa
>
>  [<c0115627>] do_page_fault+0x226/0x641
>
>  [<c010db14>] page_fault+0x38/0x40
>
>  [<c0194ca1>] reiserfs_readdir+0x3c8/0x586
>
>  [<c0142ffe>] do_no_page+0x241/0x3ca
>
> Code: ff 80 10 01 00 00 8b 43 14 a8 08 74 07 83 6a 24 01 8b 43 14
>  <6>note: rcS[51] exited with preempt_count 2
> Unable to handle kernel NULL pointer dereference at virtual address 000000e7
>  printing eip:
> c015d0b6
> *pde = ma 00000000 pa 55555000
>  [<c01635f7>] do_select+0x261/0x2c8
>
>  [<c01631f1>] __pollwait+0x0/0xc6
>
>  [<c0163933>] sys_select+0x2b0/0x4a8
>
>  [<c015051d>] sys_close+0x63/0x96
>
>  [<c010d7b7>] syscall_call+0x7/0xb
>
> Oops: 0000 [#3]
> PREEMPT
> Modules linked in:
> CPU:    0
> EIP:    0061:[<c015d0b6>]    Not tainted
> EFLAGS: 00010246   (2.6.8.1-xenU)
> EIP is at pipe_poll+0x1b/0x7c
> eax: cf8efce4   ebx: ffffffff   ecx: 00000000   edx: 00000000
> esi: cfc8c680   edi: 0000000a   ebp: 0000000a   esp: cffa3edc
> ds: 0069   es: 0069   ss: 0069
> Process init (pid: 1, threadinfo=cffa2000 task=cffa1670)
> Stack: cffa1670 00000000 cffa3f44 cfc8c680 00000400 c01635f7 cfc8c680
> 00000000 00000000 00000000 00000400 00000000 00000000 00000000 00000145
> 00000400 cffa2000 cfc70d4c cfc70d48 cfc70d44 cfc70d54 cfc70d50 cfc70d4c
> 000001ec
> Call Trace:
>  [<c01635f7>] do_select+0x261/0x2c8
>
>  [<c01631f1>] __pollwait+0x0/0xc6
>
>  [<c0163933>] sys_select+0x2b0/0x4a8
>
>  [<c015051d>] sys_close+0x63/0x96
>
>  [<c010d7b7>] syscall_call+0x7/0xb
>
> Code: 8b 8b e8 00 00 00 74 17 85 c9 74 13 89 4c 24 04 89 54 24 08
>  <0>Kernel panic: Attempted to kill init!
>  <1>Unable to handle kernel NULL pointer dereference at virtual address 00000004
>  printing eip:
> c013be54
> *pde = ma 00000000 pa 55555000
>  [<c013c683>] drain_array+0x7c/0xb0
>
>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.sourceforge.net/lists/listinfo/xen-devel


 

