
[Xen-devel] Emulation error and call trace from a domU's qemu process when testing latest Xen from source (git master)



Dom0: Debian Wheezy 64-bit with the 3.2 kernel from the official repository; Xen built from git master at commit 801ab48e5556cb54f67e3cb57f077f47e8663ced. When I boot a Windows 7 64-bit domU with the new PV drivers, the domU hangs, dom0 shows a call trace, and the load average climbs to over 30. The only unusual thing in my setup is that I re-added the udev files from an older Xen version, which makes Windows domUs with the new PV drivers boot correctly (except with a custom vifname); a sketch of those rules follows.
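
For reference, the udev files in question are the classic xen-backend rules that invoke the hotplug scripts when backend devices appear. A minimal sketch of their shape, written from memory of an older Xen tree (the exact matches and script paths may differ from what current master installs):

# /etc/udev/rules.d/xen-backend.rules -- illustrative sketch, not verbatim
# Run the Xen hotplug scripts when backend devices are added or brought online.
SUBSYSTEM=="xen-backend", KERNEL=="vbd*", RUN+="/etc/xen/scripts/block $env{ACTION}"
SUBSYSTEM=="xen-backend", KERNEL=="vif*", ACTION=="online", RUN+="$env{script} online"
SUBSYSTEM=="xen-backend", KERNEL=="vif*", ACTION=="offline", RUN+="$env{script} offline"
SUBSYSTEM=="xen-backend", ACTION=="remove", RUN+="/etc/xen/scripts/xen-hotplug-cleanup"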

From xl dmesg:
(XEN) mm.c:870:d0v4 pg_owner 0 l1e_owner 0, but real_pg_owner 1
(XEN) mm.c:941:d0v4 Error getting mfn 2c3fd3 (pfn 75dd3) from L1 entry 80000002c3fd3767 for l1e_owner=0, pg_owner=0
(XEN) mm.c:5074:d0v4 ptwr_emulate: could not get_page_from_l1e()
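
As I read these messages, dom0 (pg_owner 0, l1e_owner 0) tried to install a writable L1 page-table entry for mfn 2c3fd3, but that frame really belongs to the domU (real_pg_owner 1), so get_page_from_l1e() in xen/arch/x86/mm.c refused it while ptwr_emulate was emulating dom0's page-table write. A tiny self-contained C model of that ownership check, my own sketch rather than the real Xen code (names and types simplified):

#include <stdio.h>

typedef unsigned int domid_t;

struct page_info { domid_t owner; };   /* which domain really owns the frame */

/* Toy model of the refusal: a writable mapping is only allowed when the
   mapping domain actually owns the page; foreign pages must be grant-mapped
   instead of pointed at directly from an L1 entry. */
static int get_page_from_l1e(struct page_info *pg,
                             domid_t l1e_owner, domid_t pg_owner)
{
    domid_t real_pg_owner = pg->owner;

    if (real_pg_owner != pg_owner) {
        printf("pg_owner %u l1e_owner %u, but real_pg_owner %u\n",
               pg_owner, l1e_owner, real_pg_owner);
        return -1;
    }
    return 0;
}

int main(void)
{
    struct page_info domU_page = { .owner = 1 };   /* frame owned by the domU */

    /* dom0 tries to map it directly in its own page tables: refused. */
    if (get_page_from_l1e(&domU_page, 0, 0) < 0)
        printf("ptwr_emulate: could not get_page_from_l1e()\n");
    return 0;
}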

Here is the call trace, taken from the kernel log:
Sep  1 14:25:41 testVS01OU kernel: [  355.798759] BUG: unable to handle kernel paging request at ffff880003008da8
Sep  1 14:25:41 testVS01OU kernel: [  355.798892] IP: [<ffffffff81030abd>] ptep_set_access_flags+0x21/0x45
Sep  1 14:25:41 testVS01OU kernel: [  355.798983] PGD 1606067 PUD 160a067 PMD 3d39067 PTE 8010000003008065
Sep  1 14:25:41 testVS01OU kernel: [  355.799180] Oops: 0003 [#1] SMP
Sep  1 14:25:41 testVS01OU kernel: [  355.799298] CPU 4
Sep 1 14:25:41 testVS01OU kernel: [ 355.799338] Modules linked in: xt_physdev iptable_filter ip_tables x_tables tun xen_pciback xen_netback xen_blkback xen_gntalloc xen_gntdev xen_evtchn xenfs ib_iser rdma_cm ib_addr iw_cm ib_cm ib_sa ib_mad ib_core iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi nfsd nfs nfs_acl auth_rpcgss fscache lockd sunrpc bridge stp loop coretemp iTCO_wdt snd_pcm snd_page_alloc snd_timer i7core_edac edac_core snd soundcore crc32c_intel dcdbas pcspkr joydev evdev iTCO_vendor_support acpi_power_meter wmi button processor thermal_sys ext4 crc16 jbd2 mbcache dm_mod sd_mod crc_t10dif sg sr_mod cdrom usbhid hid ata_generic ehci_hcd ata_piix usbcore mpt2sas libata usb_common raid_class scsi_transport_sas scsi_mod bnx2 [last unloaded: scsi_wait_scan]
Sep  1 14:25:41 testVS01OU kernel: [  355.802032]
Sep  1 14:25:41 testVS01OU kernel: [  355.802068] Pid: 3795, comm: qemu-system-i38 Not tainted 3.2.0-4-amd64 #1 Debian 3.2.68-1+deb7u3 Dell Inc. PowerEdge T310/02P9X9
Sep  1 14:25:41 testVS01OU kernel: [  355.802210] RIP: e030:[<ffffffff81030abd>] [<ffffffff81030abd>] ptep_set_access_flags+0x21/0x45
Sep  1 14:25:41 testVS01OU kernel: [  355.802319] RSP: e02b:ffff880073a33a48 EFLAGS: 00010202
Sep  1 14:25:41 testVS01OU kernel: [  355.802374] RAX: 80000002c3fd3701 RBX: ffff880072801348 RCX: 80000002c3fd3767
Sep  1 14:25:41 testVS01OU kernel: [  355.802432] RDX: ffff880003008da8 RSI: 00007ffba1fb5000 RDI: ffff880072801348
Sep  1 14:25:41 testVS01OU kernel: [  355.802491] RBP: 00007ffba1fb5000 R08: 0000000000000001 R09: ffffea00000a81f0
Sep  1 14:25:41 testVS01OU kernel: [  355.802549] R10: ffff880002f2cdc0 R11: ffff880002f2cdc0 R12: 0000000000000001
Sep  1 14:25:41 testVS01OU kernel: [  355.802608] R13: ffffea00000a81c0 R14: 0000000000000702 R15: ffff8800034ee878
Sep  1 14:25:41 testVS01OU kernel: [  355.802668] FS:  00007ffbc45eb700(0000) GS:ffff88007f280000(0000) knlGS:0000000000000000
Sep  1 14:25:41 testVS01OU kernel: [  355.802744] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
Sep  1 14:25:41 testVS01OU kernel: [  355.802800] CR2: ffff880003008da8 CR3: 000000000375e000 CR4: 0000000000002660
Sep  1 14:25:41 testVS01OU kernel: [  355.802859] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Sep  1 14:25:41 testVS01OU kernel: [  355.802919] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Sep  1 14:25:41 testVS01OU kernel: [  355.802978] Process qemu-system-i38 (pid: 3795, threadinfo ffff880073a32000, task ffff88000303d180)
Sep  1 14:25:41 testVS01OU kernel: [  355.803059] Stack:
Sep  1 14:25:41 testVS01OU kernel: [  355.803109]  000000043f6eb067 ffff880072801348 ffff880072801348 ffff880003008da8
Sep  1 14:25:41 testVS01OU kernel: [  355.803305]  00007ffba1fb5000 ffffffff810d2050 0000000000000000 000000043f6eb067
Sep  1 14:25:41 testVS01OU kernel: [  355.803501]  000000043f6eb067 ffffea00000a81f0 80000002c3fd3727 ffff8800030c20c0
Sep  1 14:25:41 testVS01OU kernel: [  355.803697] Call Trace:
Sep  1 14:25:41 testVS01OU kernel: [  355.803750]  [<ffffffff810d2050>] ? handle_pte_fault+0x762/0x7b2
Sep  1 14:25:41 testVS01OU kernel: [  355.803808]  [<ffffffff810cf087>] ? pmd_val+0x7/0x8
Sep  1 14:25:41 testVS01OU kernel: [  355.803863]  [<ffffffff810cf105>] ? pte_offset_kernel+0x16/0x35
Sep  1 14:25:41 testVS01OU kernel: [  355.803920]  [<ffffffff81354173>] ? do_page_fault+0x320/0x345
Sep  1 14:25:41 testVS01OU kernel: [  355.803979]  [<ffffffff8100379c>] ? xen_write_msr_safe+0x73/0xb9
Sep  1 14:25:41 testVS01OU kernel: [  355.804036]  [<ffffffff81003223>] ? xen_end_context_switch+0xe/0x1c
Sep  1 14:25:41 testVS01OU kernel: [  355.804094]  [<ffffffff81003ba5>] ? xen_mc_issue.constprop.23+0x31/0x49
Sep  1 14:25:41 testVS01OU kernel: [  355.804153]  [<ffffffff8100d025>] ? paravirt_write_msr+0xb/0xe
Sep  1 14:25:41 testVS01OU kernel: [  355.804211]  [<ffffffff8100d6f9>] ? __switch_to+0x18e/0x265
Sep  1 14:25:41 testVS01OU kernel: [  355.804268]  [<ffffffff8102bb5c>] ? pvclock_clocksource_read+0x42/0xb2
Sep  1 14:25:41 testVS01OU kernel: [  355.804329]  [<ffffffff81035c0b>] ? arch_local_irq_enable+0x7/0x8
Sep  1 14:25:41 testVS01OU kernel: [  355.804388]  [<ffffffff810b5379>] ? sleep_on_page+0xa/0xa
Sep  1 14:25:41 testVS01OU kernel: [  355.804444]  [<ffffffff810069aa>] ? xen_clocksource_read+0x1d/0x1f
Sep  1 14:25:41 testVS01OU kernel: [  355.804502]  [<ffffffff81066439>] ? timekeeping_get_ns+0xd/0x2a
Sep  1 14:25:41 testVS01OU kernel: [  355.804560]  [<ffffffff81351715>] ? page_fault+0x25/0x30
Sep  1 14:25:41 testVS01OU kernel: [  355.804616]  [<ffffffff810b4e40>] ? file_read_actor+0x2d/0x127
Sep  1 14:25:41 testVS01OU kernel: [  355.804673]  [<ffffffff810b6ad3>] ? generic_file_aio_read+0x3b2/0x5cf
Sep  1 14:25:41 testVS01OU kernel: [  355.804732]  [<ffffffff810624b2>] ? update_rmtp+0x62/0x62
Sep  1 14:25:41 testVS01OU kernel: [  355.804788]  [<ffffffff810faf78>] ? do_sync_read+0xb4/0xec
Sep  1 14:25:41 testVS01OU kernel: [  355.804844]  [<ffffffff8106f6ac>] ? do_futex+0xb5/0x80c
Sep  1 14:25:41 testVS01OU kernel: [  355.804901]  [<ffffffff810fb663>] ? vfs_read+0x9f/0xe6
Sep  1 14:25:41 testVS01OU kernel: [  355.804956]  [<ffffffff810fb7d3>] ? sys_pread64+0x53/0x6e
Sep  1 14:25:41 testVS01OU kernel: [  355.805012]  [<ffffffff813561b2>] ? system_call_fastpath+0x16/0x1b
Sep  1 14:25:41 testVS01OU kernel: [  355.805069] Code: ef 31 f6 5b 5d e9 3b a8 08 00 41 54 55 48 89 f5 53 48 89 fb 48 83 ec 10 48 39 0a 0f 95 c0 45 85 c0 44 0f b6 e0 74 1c 84 c0 74 18 <48> 89 0a 48 8b 3f 66 66 66 90 66 66 90 48 89 ee 48 89 df e8 34
Sep  1 14:25:41 testVS01OU kernel: [  355.807147] RIP  [<ffffffff81030abd>] ptep_set_access_flags+0x21/0x45
Sep  1 14:25:41 testVS01OU kernel: [  355.807234]  RSP <ffff880073a33a48>
Sep  1 14:25:41 testVS01OU kernel: [  355.807286] CR2: ffff880003008da8
Sep  1 14:25:41 testVS01OU kernel: [  355.807337] ---[ end trace 8ab17cd52c512a17 ]---
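
Tying the two logs together: the faulting instruction (the <48> 89 0a in the Code: bytes, i.e. mov %rcx,(%rdx)) is the direct PTE store inside ptep_set_access_flags(); note that RDX matches CR2 (the PTE's address) and RCX is exactly the L1 entry 80000002c3fd3767 from the Xen message above. On a PV dom0 the page tables are mapped read-only, so this store traps into Xen, which tries to emulate it via ptwr_emulate and fails the ownership check, leaving the kernel to oops on the write. A simplified, compilable paraphrase of what the 3.2-era x86 ptep_set_access_flags() does (written from memory of arch/x86/mm/pgtable.c with the kernel types stubbed out, so treat the details as approximate):

#include <stdint.h>

typedef uint64_t pte_t;   /* stub: a bare 64-bit page-table entry */

/* If the new entry differs and the fault was a write, store the updated
   entry straight into the page table. Under Xen PV this plain store is
   what the hypervisor traps and emulates. */
int ptep_set_access_flags(pte_t *ptep, pte_t entry, int dirty)
{
    int changed = (*ptep != entry);

    if (changed && dirty)
        *ptep = entry;   /* the mov %rcx,(%rdx) at RIP offset +0x21 */

    return changed;
}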


If you need more information or further tests, tell me and I'll post them.

Thanks for any reply, and sorry for my bad English.

