Re: [Xen-devel] xenwatch: page allocation failure: order:4, mode:0x10c0d0 xen_netback:xenvif_alloc: Could not allocate netdev for vif16.0
Tuesday, November 19, 2013, 2:23:35 PM, you wrote:

> On Tue, Nov 19, 2013 at 02:02:57PM +0100, Sander Eikelenboom wrote:
>> Hi Wei,
>>
>> I ran into the following problem when trying to boot another guest after
>> less than a day of uptime (the system already started 15 guests at boot,
>> which went fine). dom0 is allocated a fixed 1536M.
>>
>> Both the host and the PV guests run the same kernel; some HVMs run a
>> slightly older kernel (3.9, for example).
>>
>> There are quite some grant-table messages in "xl dmesg"; I have also
>> included these and a "vmstat -m".
>>
> This looks like a normal OOM. alloc_netdev is not able to allocate the
> xenvif structure, which is only a few hundred / thousand bytes (I don't
> remember the exact number).
>
> Are you using persistent grants in blkback? I think those grant-table
> messages come from blkback hoarding grant references. It also implies it
> might be hoarding Dom0 pages. You can try to turn off persistent grants
> and see if it works.

Yes, it does use persistent grants; at least blkback does. I don't see any
reference to netback ..
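[Side note: the "order:4" in the failure above means the kernel needed 2^4 = 16 physically contiguous pages (64 KiB on x86), which can fail under fragmentation even with plenty of free memory overall. A minimal sketch of how an allocation size maps to a buddy-allocator order; the sizes used are illustrative, not the actual xenvif size:]

```python
import math

PAGE_SIZE = 4096  # x86 page size in bytes

def alloc_order(size_bytes):
    """Smallest buddy-allocator order whose block covers size_bytes."""
    pages = max(1, math.ceil(size_bytes / PAGE_SIZE))
    # order n provides 2**n contiguous pages
    return max(0, (pages - 1).bit_length())

# An order:4 failure means a request for 16 contiguous pages (64 KiB):
assert alloc_order(16 * PAGE_SIZE) == 4
assert alloc_order(PAGE_SIZE) == 0      # one page -> order 0
```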
# dmesg | grep persistent
[ 130.385948] xen-blkback:ring-ref 8, event-channel 17, protocol 1 (x86_64-abi) persistent grants
[ 130.401092] xen-blkback:ring-ref 9, event-channel 18, protocol 1 (x86_64-abi) persistent grants
[ 136.414469] xen-blkback:ring-ref 8, event-channel 10, protocol 1 (x86_64-abi) persistent grants
[ 142.772308] xen-blkback:ring-ref 8, event-channel 17, protocol 1 (x86_64-abi) persistent grants
[ 148.152987] xen-blkback:ring-ref 8, event-channel 10, protocol 1 (x86_64-abi) persistent grants
[ 154.047275] xen-blkback:ring-ref 8, event-channel 10, protocol 1 (x86_64-abi) persistent grants
[ 160.033898] xen-blkback:ring-ref 8, event-channel 10, protocol 1 (x86_64-abi) persistent grants
[ 166.474991] xen-blkback:ring-ref 8, event-channel 17, protocol 1 (x86_64-abi) persistent grants
[ 172.111850] xen-blkback:ring-ref 8, event-channel 10, protocol 1 (x86_64-abi) persistent grants
[ 180.454091] xen-blkback:ring-ref 8, event-channel 17, protocol 1 (x86_64-abi) persistent grants
[ 186.935862] xen-blkback:ring-ref 8, event-channel 10, protocol 1 (x86_64-abi) persistent grants
[ 193.496961] xen-blkback:ring-ref 9, event-channel 11, protocol 1 (x86_64-abi) persistent grants
[ 200.179170] xen-blkback:ring-ref 8, event-channel 10, protocol 1 (x86_64-abi) persistent grants
[ 206.476770] xen-blkback:ring-ref 8, event-channel 10, protocol 1 (x86_64-abi) persistent grants
[ 259.796801] xen-blkback:ring-ref 8, event-channel 32, protocol 1 (x86_64-abi) persistent grants
[ 259.811314] xen-blkback:ring-ref 9, event-channel 33, protocol 1 (x86_64-abi) persistent grants
[ 264.968420] xen-blkback:ring-ref 8, event-channel 22, protocol 1 (x86_64-abi) persistent grants

Hmm, naive as I am, I thought that would limit the memory usage by re-using
the grants (and thus pages). Why does it keep increasing the number of
grants? Don't the device rings have a max size, so that once it's reached,
it should stop allocating new grants and only reuse the persistent ones?
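[Editorial note: the cap implied by the question can be estimated. Assuming the classic single-page blkif ring and the v1 protocol's 11 segments per request (BLKIF_MAX_SEGMENTS_PER_REQUEST) with a roughly 112-byte request struct; these constants are assumptions about this setup, not values read from the kernel in question:]

```python
# Rough upper bound on persistent grants per block device ring, under
# the classic single-page blkif ring assumption.
RING_PAGE_SIZE = 4096
REQUEST_SIZE = 112        # approx. sizeof(struct blkif_request), v1 protocol
SEGS_PER_REQUEST = 11     # BLKIF_MAX_SEGMENTS_PER_REQUEST (assumed)

# Ring macros round the slot count down to a power of two:
ring_slots = 1 << ((RING_PAGE_SIZE // REQUEST_SIZE).bit_length() - 1)
max_persistent_grants = ring_slots * SEGS_PER_REQUEST

print(ring_slots, max_persistent_grants)  # 32 slots -> 352 grants per vbd
```

So each persistent-grant vbd can pin on the order of a few hundred grants (and the backing Dom0 pages), which multiplies quickly across 15+ guests.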
Also, it must be a special memtype/size? Dom0 was only using about 400M of
its 1536M.

> Wei.

>> --
>> Sander
>>
>> [54807.299791] xenwatch: page allocation failure: order:4, mode:0x10c0d0
>> [54807.317520] CPU: 5 PID: 54 Comm: xenwatch Not tainted 3.12.0-20131104 #1
>> [54807.334747] Hardware name: MSI MS-7640/890FXA-GD70 (MS-7640) , BIOS V1.8B1 09/13/2010
>> [54807.351990] 0000000000000000 ffff8800591a3828 ffffffff81a6a8ba 0000000000000006
>> [54807.368857] 000000000010c0d0 ffff8800591a38b8 ffffffff81138770 0000000000000004
>> [54807.385539] 000000000010c0d0 ffff8800591a38b8 ffffffff81a661de ffff88005fd1c680
>> [54807.401978] Call Trace:
>> [54807.418015] [<ffffffff81a6a8ba>] dump_stack+0x4f/0x84
>> [54807.433971] [<ffffffff81138770>] warn_alloc_failed+0xf0/0x140
>> [54807.449644] [<ffffffff81a661de>] ? __alloc_pages_direct_compact+0x1ac/0x1be
>> [54807.465164] [<ffffffff8113bfaa>] __alloc_pages_nodemask+0x7aa/0x9d0
>> [54807.480510] [<ffffffff810ed069>] ? trace_hardirqs_off_caller+0xb9/0x160
>> [54807.495622] [<ffffffff81175277>] alloc_pages_current+0xb7/0x180
>> [54807.510530] [<ffffffff81138059>] __get_free_pages+0x9/0x40
>> [54807.525185] [<ffffffff8117cbdc>] __kmalloc+0x19c/0x1c0
>> [54807.539538] [<ffffffff8190e9b4>] alloc_netdev_mqs+0x64/0x340
>> [54807.553814] [<ffffffff8192ac20>] ? alloc_etherdev_mqs+0x20/0x20
>> [54807.567777] [<ffffffff816dc3e4>] xenvif_alloc+0x64/0x2c0
>> [54807.581473] [<ffffffff816dbc57>] netback_probe+0x287/0x2d0
>> [54807.594971] [<ffffffff814bfe46>] xenbus_dev_probe+0x66/0x110
>> [54807.608231] [<ffffffff81615105>] driver_probe_device+0x75/0x210
>> [54807.621227] [<ffffffff81615350>] ? __driver_attach+0xb0/0xb0
>> [54807.634071] [<ffffffff8161539b>] __device_attach+0x4b/0x60
>> [54807.646626] [<ffffffff8161316e>] bus_for_each_drv+0x4e/0xa0
>> [54807.658918] [<ffffffff81615058>] device_attach+0x98/0xb0
>> [54807.671253] [<ffffffff816144b0>] bus_probe_device+0xb0/0xe0
>> [54807.683379] [<ffffffff81612277>] device_add+0x3b7/0x700
>> [54807.695145] [<ffffffff8161ca1d>] ? device_pm_sleep_init+0x4d/0x80
>> [54807.706824] [<ffffffff816125d9>] device_register+0x19/0x20
>> [54807.718145] [<ffffffff814bf9b1>] xenbus_probe_node+0x141/0x170
>> [54807.729256] [<ffffffff81613236>] ? bus_for_each_dev+0x76/0xa0
>> [54807.740091] [<ffffffff814bfbb0>] xenbus_dev_changed+0x1d0/0x1e0
>> [54807.750811] [<ffffffff814bff26>] backend_changed+0x16/0x20
>> [54807.761256] [<ffffffff814bdf3e>] xenwatch_thread+0x4e/0x140
>> [54807.771371] [<ffffffff810bc1e0>] ? __init_waitqueue_head+0x60/0x60
>> [54807.781443] [<ffffffff814bdef0>] ? xs_watch+0x60/0x60
>> [54807.791310] [<ffffffff810bb716>] kthread+0xd6/0xe0
>> [54807.800794] [<ffffffff81a752bb>] ? _raw_spin_unlock_irq+0x2b/0x70
>> [54807.810150] [<ffffffff810bb640>] ? __init_kthread_worker+0x70/0x70
>> [54807.819526] [<ffffffff81a762cc>] ret_from_fork+0x7c/0xb0
>> [54807.828638] [<ffffffff810bb640>] ? __init_kthread_worker+0x70/0x70
>> [54807.837482] Mem-Info:
>> [54807.846011] Node 0 DMA per-cpu:
>> [54807.854277] CPU 0: hi: 0, btch: 1 usd: 0
>> [54807.862404] CPU 1: hi: 0, btch: 1 usd: 0
>> [54807.870241] CPU 2: hi: 0, btch: 1 usd: 0
>> [54807.877985] CPU 3: hi: 0, btch: 1 usd: 0
>> [54807.885344] CPU 4: hi: 0, btch: 1 usd: 0
>> [54807.892389] CPU 5: hi: 0, btch: 1 usd: 0
>> [54807.899102] Node 0 DMA32 per-cpu:
>> [54807.905665] CPU 0: hi: 186, btch: 31 usd: 68
>> [54807.911985] CPU 1: hi: 186, btch: 31 usd: 6
>> [54807.917964] CPU 2: hi: 186, btch: 31 usd: 149
>> [54807.923666] CPU 3: hi: 186, btch: 31 usd: 82
>> [54807.929031] CPU 4: hi: 186, btch: 31 usd: 169
>> [54807.934293] CPU 5: hi: 186, btch: 31 usd: 0
>> [54807.939207] active_anon:12850 inactive_anon:8944 isolated_anon:0
>> [54807.939207] active_file:70321 inactive_file:177850 isolated_file:0
>> [54807.939207] unevictable:562 dirty:34 writeback:0 unstable:0
>> [54807.939207] free:31143 slab_reclaimable:21805 slab_unreclaimable:12717
>> [54807.939207] mapped:3344 shmem:276 pagetables:1211 bounce:0
>> [54807.939207] free_cma:0
>> [54807.966465] Node 0 DMA free:5644kB min:52kB low:64kB high:76kB active_anon:0kB inactive_anon:4kB active_file:4kB inactive_file:0kB unevictable:24kB isolated(anon):0kB isolated(file):0kB present:15968kB managed:15884kB mlocked:24kB dirty:0kB writeback:0kB mapped:24kB shmem:0kB slab_reclaimable:8992kB slab_unreclaimable:656kB kernel_stack:16kB pagetables:56kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
>> [54807.981676] lowmem_reserve[]: 0 1395 1395 1395
>> [54807.987015] Node 0 DMA32 free:118928kB min:4748kB low:5932kB high:7120kB active_anon:51028kB inactive_anon:35772kB active_file:281280kB inactive_file:711400kB unevictable:2224kB isolated(anon):0kB isolated(file):0kB present:1556480kB managed:1433216kB mlocked:2224kB dirty:136kB writeback:0kB mapped:13168kB shmem:1104kB slab_reclaimable:78228kB slab_unreclaimable:50212kB kernel_stack:3256kB pagetables:4788kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
>> [54808.011107] lowmem_reserve[]: 0 0 0 0
>> [54808.017244] Node 0 DMA: 19*4kB (UEM) 18*8kB (UEM) 15*16kB (UEM) 14*32kB (UEM) 8*64kB (UEM) 7*128kB (UEM) 3*256kB (U) 1*512kB (U) 0*1024kB 1*2048kB (R) 0*4096kB = 5644kB
>> [54808.030315] Node 0 DMA32: 11490*4kB (UEM) 6743*8kB (UEM) 1133*16kB (EMR) 6*32kB (MR) 0*64kB 1*128kB (R) 1*256kB (R) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 118608kB
>> [54808.044303] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
>> [54808.051507] 249203 total pagecache pages
>> [54808.058647] 201 pages in swap cache
>> [54808.065986] Swap cache stats: add 2902, delete 2701, find 62055/62191
>> [54808.073216] Free swap = 2092752kB
>> [54808.080397] Total swap = 2097148kB
>> [54808.093896] 393215 pages RAM
>> [54808.101004] 30940 pages reserved
>> [54808.108168] 339108 pages shared
>> [54808.115202] 258777 pages non-shared
>> [54808.122235] xen_netback:xenvif_alloc: Could not allocate netdev for vif16.0
>> [54808.129391] vif vif-16-0: 12 creating interface
>>
>>
>> ~# vmstat -m
>> Cache                       Num  Total   Size  Pages
>> ext4_groupinfo_4k         24426  24426    216     18
>> ceph_osd_request              0      0    960     17
>> xt_hashlimit                  0      0    136     30
>> nf_conntrack_ffffffff822d2900  1950   2100    400     20
>> nf_conntrack_expect           0      0    288     28
>> dm_snap_pending_exception  2496   2496    104     39
>> kcopyd_job                   91    133   4176      7
>> dm_rq_target_io               0      0    416     19
>> search                        0      0    616     26
>> blkif_cache                 108    108    864     18
>> cfq_io_cq                   204    204    120     34
>> cfq_queue                   170    170    232     17
>> bsg_cmd                       0      0    312     26
>> ceph_cap                      0      0    128     32
>> ceph_inode_info               0      0   2176     15
>> gfs2_mblk                     0      0    432     18
>> gfs2_bufdata                  0      0     88     46
>> gfs2_inode                    0      0   1280     25
>> gfs2_glock(aspace)            0      0    960     17
>> gfs2_glock                    0      0    592     27
>> btrfs_delayed_data_ref        0      0     96     42
>> btrfs_delayed_ref_head        0      0    232     17
>> btrfs_delayed_node            0      0    392     20
>> btrfs_ordered_extent          0      0    472     17
>> btrfs_extent_buffer           0      0    568     28
>> btrfs_delalloc_work           0      0    184     22
>> btrfs_path                    0      0    144     28
>> btrfs_transaction             0      0    600     27
>> btrfs_trans_handle            0      0    160     25
>> btrfs_inode                   0      0   1944     16
>> fuse_request                102    102    480     17
>> fuse_inode                  336    476   1152     28
>> ntfs_big_inode_cache          0      0   1664     19
>> ntfs_inode_cache              0      0    696     23
>> cifs_small_rq                36     36    448     18
>> cifs_request                  4      4  16512      1
>> cifs_inode_cache              0      0   1112     29
>> isofs_inode_cache             0      0    960     17
>> fat_inode_cache               0      0   1168     28
>> fat_cache                     0      0     40    102
>> hugetlbfs_inode_cache        16     16    976     16
>> jbd2_transaction_s          150    150    320     25
>> jbd2_journal_handle         306    306     80     51
>> journal_handle                0      0     56     73
>> journal_head               1518   1800    112     36
>> revoke_table               1536   1536     16    256
>> revoke_record               768    768     32    128
>> ext4_inode_cache          30702  30952   1704     19
>> ext4_free_data              768    768     64     64
>> ext4_allocation_context     180    180    136     30
>> ext4_prealloc_space         156    156    152     26
>> ext4_io_end                 336    336     72     56
>> ext4_extent_status         7361   7446     40    102
>> ext3_inode_cache              0      0   1312     24
>> ext3_xattr                 3726   3956     88     46
>> dquot                         0      0    384     21
>> kioctx                        0      0    896     18
>> pid_namespace                 0      0   2208     14
>> posix_timers_cache          108    108    296     27
>> UNIX                        198    198   1472     22
>> Cache                       Num  Total   Size  Pages
>> UDP-Lite                      0      0   1280     25
>> ip_fib_trie                 438    438     56     73
>> PING                          0      0   1216     26
>> UDP                         150    150   1280     25
>> tw_sock_TCP                 399    399    192     21
>> TCP                         104    104   2368     13
>> fscache_cookie_jar            0      0    192     21
>> sgpool-128                   54     78   5120      6
>> sgpool-64                    72     72   2560     12
>> sgpool-32                   325    325   1280     25
>> sgpool-16                   277    350    640     25
>> blkdev_integrity              0      0    112     36
>> blkdev_queue                165    165   2872     11
>> blkdev_requests             315    315    376     21
>> fsnotify_event_holder         0      0     24    170
>> sock_inode_cache            272    272    960     17
>> file_lock_cache             102    102    240     17
>> shmem_inode_cache          2019   2072   1144     28
>> Acpi-State                  255    255     80     51
>> Acpi-Namespace             1428   1428     40    102
>> task_delay_info            1704   1704    168     24
>> taskstats                   144    144    328     24
>> proc_inode_cache           1632   1664    976     16
>> sigqueue                    250    250    160     25
>> bdev_cache                  192    192   1344     24
>> sysfs_dir_cache           23817  23968    144     28
>> filp                       1807   2400    320     25
>> inode_cache                1861   2244    912     17
>> dentry                    27252  29440    248     16
>> buffer_head               67283 102063    104     39
>> vm_area_struct             4527   4708    184     22
>> mm_struct                   308    308   1152     28
>> files_cache                 253    253    704     23
>> signal_cache                667    667   1408     23
>> sighand_cache               407    490   2240     14
>> task_struct                 430    442   4240      7
>> anon_vma                   4676   4676    144     28
>> shared_policy_node          850    850     48     85
>> numa_policy                3528   3528     72     56
>> radix_tree_node            7622   8050    568     28
>> idr_layer_cache             240    240   2112     15
>> dma-kmalloc-8192              0      0   8192      4
>> dma-kmalloc-4096              0      0   4096      8
>> dma-kmalloc-2048              0      0   2048     16
>> dma-kmalloc-1024              0      0   1024     16
>> dma-kmalloc-512               0      0    512     16
>> dma-kmalloc-256               0      0    256     16
>> dma-kmalloc-128               0      0    128     32
>> dma-kmalloc-64                0      0     64     64
>> dma-kmalloc-32                0      0     32    128
>> dma-kmalloc-16                0      0     16    256
>> dma-kmalloc-8                 0      0      8    512
>> dma-kmalloc-192               0      0    192     21
>> dma-kmalloc-96                0      0     96     42
>> kmalloc-8192                 32     32   8192      4
>> kmalloc-4096                835    916   4096      8
>> kmalloc-2048                773    832   2048     16
>> kmalloc-1024               2335   2336   1024     16
>> kmalloc-512                1488   1536    512     16
>> kmalloc-256                2535   2784    256     16
>> Cache                       Num  Total   Size  Pages
>> kmalloc-192               69883  70707    192     21
>> kmalloc-128                2603   4992    128     32
>> kmalloc-96                 5628   5628     96     42
>> kmalloc-64                41984  41984     64     64
>> kmalloc-32               154792 156288     32    128
>> kmalloc-16                 6343   7936     16    256
>> kmalloc-8                  8192   8192      8    512
>> kmem_cache_node             256    256    128     32
>> kmem_cache                  176    176    256     16
>>
>>
>> (XEN) [2013-11-18 21:44:32] grant_table.c:1249:d1 Expanding dom (1) grant table from (4) to (5) frames.
>> (XEN) [2013-11-18 21:44:32] grant_table.c:289:d0 Increased maptrack size to 8 frames
>> (XEN) [2013-11-18 21:49:28] grant_table.c:1249:d1 Expanding dom (1) grant table from (5) to (6) frames.
>> (XEN) [2013-11-19 00:00:26] grant_table.c:1249:d1 Expanding dom (1) grant table from (6) to (7) frames.
>> (XEN) [2013-11-19 00:00:26] grant_table.c:1249:d1 Expanding dom (1) grant table from (7) to (8) frames.
>> (XEN) [2013-11-19 00:00:26] grant_table.c:289:d0 Increased maptrack size to 9 frames
>> (XEN) [2013-11-19 00:00:26] grant_table.c:289:d0 Increased maptrack size to 10 frames
>> (XEN) [2013-11-19 00:00:42] grant_table.c:1249:d1 Expanding dom (1) grant table from (8) to (9) frames.
>> (XEN) [2013-11-19 00:01:02] grant_table.c:1249:d1 Expanding dom (1) grant table from (9) to (10) frames.
>> (XEN) [2013-11-19 00:09:27] grant_table.c:1249:d1 Expanding dom (1) grant table from (10) to (11) frames.
>> (XEN) [2013-11-19 00:09:27] grant_table.c:289:d0 Increased maptrack size to 11 frames
>> (XEN) [2013-11-19 04:15:26] grant_table.c:289:d0 Increased maptrack size to 12 frames
>> (XEN) [2013-11-19 04:15:28] grant_table.c:1249:d12 Expanding dom (12) grant table from (4) to (5) frames.
>> (XEN) [2013-11-19 04:15:40] grant_table.c:289:d0 Increased maptrack size to 13 frames
>> (XEN) [2013-11-19 04:15:47] grant_table.c:1249:d10 Expanding dom (10) grant table from (4) to (5) frames.
>> (XEN) [2013-11-19 04:15:47] grant_table.c:1249:d10 Expanding dom (10) grant table from (5) to (6) frames.
>> (XEN) [2013-11-19 04:15:47] grant_table.c:289:d0 Increased maptrack size to 14 frames
>> (XEN) [2013-11-19 04:15:52] grant_table.c:1249:d5 Expanding dom (5) grant table from (4) to (5) frames.
>> (XEN) [2013-11-19 04:15:52] grant_table.c:1249:d5 Expanding dom (5) grant table from (5) to (6) frames.
>> (XEN) [2013-11-19 04:15:52] grant_table.c:1249:d5 Expanding dom (5) grant table from (6) to (7) frames.
>> (XEN) [2013-11-19 04:15:52] grant_table.c:289:d0 Increased maptrack size to 15 frames
>> (XEN) [2013-11-19 04:15:52] grant_table.c:289:d0 Increased maptrack size to 16 frames
>> (XEN) [2013-11-19 04:15:54] grant_table.c:1249:d8 Expanding dom (8) grant table from (4) to (5) frames.
>> (XEN) [2013-11-19 04:15:54] grant_table.c:1249:d8 Expanding dom (8) grant table from (5) to (6) frames.
>> (XEN) [2013-11-19 04:15:54] grant_table.c:1249:d8 Expanding dom (8) grant table from (6) to (7) frames.
>> (XEN) [2013-11-19 04:15:54] grant_table.c:289:d0 Increased maptrack size to 17 frames
>> (XEN) [2013-11-19 04:15:54] grant_table.c:289:d0 Increased maptrack size to 18 frames
>> (XEN) [2013-11-19 04:15:56] grant_table.c:1249:d2 Expanding dom (2) grant table from (4) to (5) frames.
>> (XEN) [2013-11-19 04:15:56] grant_table.c:1249:d2 Expanding dom (2) grant table from (5) to (6) frames.
>> (XEN) [2013-11-19 04:15:56] grant_table.c:1249:d2 Expanding dom (2) grant table from (6) to (7) frames.
>> (XEN) [2013-11-19 04:15:56] grant_table.c:289:d0 Increased maptrack size to 19 frames
>> (XEN) [2013-11-19 04:15:57] grant_table.c:1249:d3 Expanding dom (3) grant table from (4) to (5) frames.
>> (XEN) [2013-11-19 04:15:57] grant_table.c:1249:d3 Expanding dom (3) grant table from (5) to (6) frames.
>> (XEN) [2013-11-19 04:15:57] grant_table.c:1249:d3 Expanding dom (3) grant table from (6) to (7) frames.
>> (XEN) [2013-11-19 04:15:57] grant_table.c:289:d0 Increased maptrack size to 20 frames
>> (XEN) [2013-11-19 04:16:00] grant_table.c:289:d0 Increased maptrack size to 21 frames
>> (XEN) [2013-11-19 04:16:00] grant_table.c:1249:d13 Expanding dom (13) grant table from (4) to (5) frames.
>> (XEN) [2013-11-19 04:16:00] grant_table.c:1249:d13 Expanding dom (13) grant table from (5) to (6) frames.
>> (XEN) [2013-11-19 04:16:00] grant_table.c:289:d0 Increased maptrack size to 22 frames
>> (XEN) [2013-11-19 04:16:00] grant_table.c:1249:d13 Expanding dom (13) grant table from (6) to (7) frames.
>> (XEN) [2013-11-19 04:16:03] grant_table.c:1249:d9 Expanding dom (9) grant table from (4) to (5) frames.
>> (XEN) [2013-11-19 04:16:03] grant_table.c:1249:d9 Expanding dom (9) grant table from (5) to (6) frames.
>> (XEN) [2013-11-19 04:16:03] grant_table.c:1249:d9 Expanding dom (9) grant table from (6) to (7) frames.
>> (XEN) [2013-11-19 04:16:03] grant_table.c:289:d0 Increased maptrack size to 23 frames
>> (XEN) [2013-11-19 04:16:03] grant_table.c:289:d0 Increased maptrack size to 24 frames
>> (XEN) [2013-11-19 04:16:03] grant_table.c:1249:d4 Expanding dom (4) grant table from (4) to (5) frames.
>> (XEN) [2013-11-19 04:16:03] grant_table.c:1249:d4 Expanding dom (4) grant table from (5) to (6) frames.
>> (XEN) [2013-11-19 04:16:03] grant_table.c:1249:d4 Expanding dom (4) grant table from (6) to (7) frames.
>> (XEN) [2013-11-19 04:16:03] grant_table.c:289:d0 Increased maptrack size to 25 frames
>> (XEN) [2013-11-19 04:16:03] grant_table.c:289:d0 Increased maptrack size to 26 frames
>> (XEN) [2013-11-19 04:16:06] grant_table.c:1249:d10 Expanding dom (10) grant table from (6) to (7) frames.
>> (XEN) [2013-11-19 04:16:22] grant_table.c:1249:d12 Expanding dom (12) grant table from (5) to (6) frames.
>> (XEN) [2013-11-19 04:16:22] grant_table.c:1249:d12 Expanding dom (12) grant table from (6) to (7) frames.
>> (XEN) [2013-11-19 04:16:22] grant_table.c:289:d0 Increased maptrack size to 27 frames
>> (XEN) [2013-11-19 04:16:23] grant_table.c:289:d0 Increased maptrack size to 28 frames
>> (XEN) [2013-11-19 04:16:24] grant_table.c:1249:d11 Expanding dom (11) grant table from (4) to (5) frames.
>> (XEN) [2013-11-19 04:16:24] grant_table.c:1249:d11 Expanding dom (11) grant table from (5) to (6) frames.
>> (XEN) [2013-11-19 04:16:24] grant_table.c:289:d0 Increased maptrack size to 29 frames
>> (XEN) [2013-11-19 04:16:27] grant_table.c:1249:d7 Expanding dom (7) grant table from (5) to (6) frames.
>> (XEN) [2013-11-19 04:16:27] grant_table.c:1249:d7 Expanding dom (7) grant table from (6) to (7) frames.
>> (XEN) [2013-11-19 05:15:04] grant_table.c:289:d0 Increased maptrack size to 30 frames
>> (XEN) [2013-11-19 05:15:05] grant_table.c:1249:d6 Expanding dom (6) grant table from (4) to (5) frames.
>> (XEN) [2013-11-19 05:15:05] grant_table.c:1249:d6 Expanding dom (6) grant table from (5) to (6) frames.
>> (XEN) [2013-11-19 05:15:05] grant_table.c:1249:d6 Expanding dom (6) grant table from (6) to (7) frames.
>> (XEN) [2013-11-19 05:15:05] grant_table.c:289:d0 Increased maptrack size to 31 frames
>> (XEN) [2013-11-19 12:49:37] AMD-Vi: Share p2m table with iommu: p2m table = 0x52d3c5

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
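[Editorial footnote on the grant-table expansion messages quoted above: each extra grant-table frame adds a fixed number of grant references. Assuming 4 KiB frames and the 8-byte v1 grant entry layout (an assumption about this configuration, not something stated in the log), a sketch of the arithmetic:]

```python
# Grant references gained per grant-table frame, assuming 4 KiB frames
# and the 8-byte v1 grant entry (assumed layout for this setup).
FRAME_SIZE = 4096
GRANT_ENTRY_V1_SIZE = 8  # approx. sizeof(grant_entry_v1)

entries_per_frame = FRAME_SIZE // GRANT_ENTRY_V1_SIZE
# e.g. a domain expanding from 4 to 7 frames, as several do in the log:
print(4 * entries_per_frame, "->", 7 * entries_per_frame)  # 2048 -> 3584
```

So the repeated "Expanding dom (N) grant table" messages mean each guest's pool of outstanding grants kept growing by 512 entries per step, consistent with persistent grants accumulating rather than being recycled.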