
Re: [Xen-devel] Xen Balloon Bug in 2.6.27-git11



Hello,

With your patch I don't get any error, but if I start a machine with
128 MB RAM and increase it via xm mem-set up to 256 MB, neither top nor
cat /proc/meminfo shows the new memory...
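
To see whether the guest ever notices the new pages, I watch MemTotal
directly. Here is a minimal sketch of that check (plain userspace C,
just an illustration of the cat /proc/meminfo test above, not from the
original report):

    /* Poll the guest-visible MemTotal so a successful balloon-up
     * shows up immediately. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        for (;;) {
            FILE *f = fopen("/proc/meminfo", "r");
            char line[128];

            if (!f)
                return 1;
            while (fgets(line, sizeof(line), f)) {
                if (strncmp(line, "MemTotal:", 9) == 0) {
                    fputs(line, stdout);  /* e.g. "MemTotal: 131072 kB" */
                    break;
                }
            }
            fclose(f);
            sleep(2);  /* re-check every two seconds */
        }
    }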

With 2.6.18 this was not a problem. Is the balloon driver working correctly?

PS: I'm using a 64-bit kernel under Xen 3.2.

My config was attached to my first post...

Thanks and greetings
Torben Viets

Jeremy Fitzhardinge wrote:
> viets@xxxxxxx wrote:
>> Hello,
>>
>> I'm unsure where to send a bug report for Linux kernel 2.6.27-git11.
>>
>> If I use the attached config, I get the kernel panic shown in the
>> attached dmesg.
>>
>> Hope this can be fixed.
>>   xen_balloon: Initialising balloon driver.
>> sysfs: duplicate filename 'memory' can not be created
>> ------------[ cut here ]------------
>> WARNING: at fs/sysfs/dir.c:465 sysfs_add_one+0x53/0x60()
>> Pid: 1, comm: swapper Not tainted 2.6.26-git11 #2
>>
>> Call Trace:
>>  [<ffffffff8023285f>] warn_on_slowpath+0x5f/0x90
>>  [<ffffffff8023390e>] printk+0x4e/0x60
>>  [<ffffffff8027f3d4>] kmem_cache_alloc+0x64/0xc0
>>  [<ffffffff802cf450>] sysfs_ilookup_test+0x0/0x10
>>  [<ffffffff802cf6f9>] sysfs_find_dirent+0x29/0x40
>>  [<ffffffff802cf734>] __sysfs_add_one+0x24/0xa0
>>  [<ffffffff802cf803>] sysfs_add_one+0x53/0x60
>>  [<ffffffff802cfe50>] create_dir+0x60/0xb0
>>  [<ffffffff8051c940>] balloon_init+0x0/0x1e0
>>  [<ffffffff802cfed1>] sysfs_create_dir+0x31/0x50
>>  [<ffffffff80370233>] kobject_add_internal+0xe3/0x1c0
>>  [<ffffffff8051c940>] balloon_init+0x0/0x1e0
>>  [<ffffffff8037033c>] kset_register+0x2c/0x50
>>  [<ffffffff8051c9f0>] balloon_init+0xb0/0x1e0
>>  [<ffffffff8051c4a0>] genhd_device_init+0x0/0x60
>>  [<ffffffff8050a9e8>] kernel_init+0x128/0x310
>>  [<ffffffff8020b8e0>] xen_load_sp0+0x90/0xc0
>>  [<ffffffff8020bf65>] xen_mc_flush+0xc5/0x190
>>  [<ffffffff8022e643>] finish_task_switch+0x43/0xc0
>>  [<ffffffff80212079>] child_rip+0xa/0x11
>>  [<ffffffff80211388>] retint_restore_args+0x5/0x20
>>  [<ffffffff803799a0>] dummycon_dummy+0x0/0x10
>>  [<ffffffff803799a0>] dummycon_dummy+0x0/0x10
>>  [<ffffffff8021206f>] child_rip+0x0/0x11
>>
>> ---[ end trace 4eaa2a86a8e2da22 ]---
>> kobject_add_internal failed for memory with -EEXIST, don't try to
>> register things with the same name in the same directory.
>> Pid: 1, comm: swapper Tainted: G        W 2.6.26-git11 #2
>>
>> Call Trace:
>>  [<ffffffff803702c3>] kobject_add_internal+0x173/0x1c0
>>  [<ffffffff8051c940>] balloon_init+0x0/0x1e0
>>  [<ffffffff8037033c>] kset_register+0x2c/0x50
>>  [<ffffffff8051c9f0>] balloon_init+0xb0/0x1e0
>>  [<ffffffff8051c4a0>] genhd_device_init+0x0/0x60
>>  [<ffffffff8050a9e8>] kernel_init+0x128/0x310
>>  [<ffffffff8020b8e0>] xen_load_sp0+0x90/0xc0
>>  [<ffffffff8020bf65>] xen_mc_flush+0xc5/0x190
>>  [<ffffffff8022e643>] finish_task_switch+0x43/0xc0
>>  [<ffffffff80212079>] child_rip+0xa/0x11
>>  [<ffffffff80211388>] retint_restore_args+0x5/0x20
>>  [<ffffffff803799a0>] dummycon_dummy+0x0/0x10
>>  [<ffffffff803799a0>] dummycon_dummy+0x0/0x10
>>  [<ffffffff8021206f>] child_rip+0x0/0x11
>>   
> 
> Thanks for the report.  I was looking at the balloon driver just
> yesterday and fixed this particular error up.  I'll post a balloon
> driver update shortly.
> 
> Thanks,
>    J
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel
> 
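
For the archives: the -EEXIST in the quoted trace is a plain sysfs name
collision. balloon_init registers a kset named "memory", and a sysfs
entry of that name already exists (presumably /sys/devices/system/memory
from memory hotplug, judging by the config). A minimal, hypothetical
module sketch of that failure mode, not the actual driver code:

    /* Hypothetical example: registering two ksets under the same
     * parent with the same name; the second one is rejected, which
     * is what the "duplicate filename 'memory'" warning reports. */
    #include <linux/kernel.h>
    #include <linux/kobject.h>
    #include <linux/module.h>

    static struct kset *first, *second;

    static int __init collide_init(void)
    {
        /* Stands in for whoever created "memory" first. */
        first = kset_create_and_add("memory", NULL, NULL);
        if (!first)
            return -ENOMEM;

        /* Stands in for balloon_init asking for the same name;
         * the duplicate registration fails and this returns NULL. */
        second = kset_create_and_add("memory", NULL, NULL);
        if (!second) {
            printk(KERN_WARNING "second 'memory' kset rejected\n");
            kset_unregister(first);
            return -EEXIST;
        }
        return 0;
    }

    static void __exit collide_exit(void)
    {
        kset_unregister(second);
        kset_unregister(first);
    }

    module_init(collide_init);
    module_exit(collide_exit);
    MODULE_LICENSE("GPL");

The obvious fix, and presumably what Jeremy's update does, is to register
the balloon directory under a name that cannot collide.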


-- 
Torben Viets                              w               Viets@xxxxxxx
n@work Internet Informationssysteme GmbH  o      http://support.work.de
Wandalenweg 5                             r   Tel.: +49 40 23 88 09 - 0
D-20097 Hamburg                           k   Fax:  +49 40 23 88 09 -29
HR B 61 668  - Amtsgericht Hamburg                    Gf Jan Diegelmann

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel