
Re: [Xen-users] pv_ops 2.6.31.6



On Sat, Jan 23, 2010 at 07:12:12PM -0000, Ian Tobin wrote:
> Good point!
> 
> I think I have overlooked that, but I do recall seeing it in the instructions. 
>  Would that slow it down then?
> 

Yeah, at least on my 32-bit systems it'll make dom0 crash within half an hour or so. 

CONFIG_HIGHPTE=n has been stable for me.
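
If you want to double-check before rebuilding, something along these lines
should work (just a rough sketch; the config path and job count are examples
and may differ on your system):

    # see what the currently running kernel was built with
    grep CONFIG_HIGHPTE /boot/config-$(uname -r)

    # in the pv_ops kernel source tree, turn it off and rebuild
    sed -i 's/^CONFIG_HIGHPTE=y/# CONFIG_HIGHPTE is not set/' .config
    make oldconfig
    make -j4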

-- Pasi

> 
> Ian
> 
> 
> -----Original Message-----
> From: Pasi Kärkkäinen [mailto:pasik@xxxxxx] 
> Sent: 23 January 2010 18:32
> To: Ian Tobin
> Cc: Olivier B.; xen-users@xxxxxxxxxxxxxxxxxxx
> Subject: Re: [Xen-users] pv_ops 2.6.31.6
> 
> On Sat, Jan 23, 2010 at 06:18:05PM -0000, Ian Tobin wrote:
> > Thanks, I'll do the test and wait for the RAM before doing anything further.  No 
> > errors in dmesg, and it's a 32-bit dom0.
> >
> 
> Do you have CONFIG_HIGHPTE=n in .config? 
> 
> If it's enabled (=y) then your dom0 kernel will definitely crash.. 
>  
> -- Pasi
> 
> > Ian
> > 
> > 
> > 
> > -----Original Message-----
> > From: Pasi Kärkkäinen [mailto:pasik@xxxxxx] 
> > Sent: 23 January 2010 11:56
> > To: Ian Tobin
> > Cc: Olivier B.; xen-users@xxxxxxxxxxxxxxxxxxx
> > Subject: Re: [Xen-users] pv_ops 2.6.31.6
> > 
> > On Sat, Jan 23, 2010 at 09:38:28AM -0000, Ian Tobin wrote:
> > > Hi,
> > > 
> > > Apologies for the delay.
> > > 
> > > xm top showed that a domU (Windows 2008) was using 100% CPU even though it 
> > > wasn't actually doing anything. 
> > > I killed it off so that no domUs were running, but the system was still 
> > > painfully slow until it was rebooted, and even after that it slows down again.
> > >
> > 
> > So you're saying the system was slow even after the Windows 2008 guest was 
> > killed? Something is definitely wrong there..
> > Any errors in "xm dmesg" or in dom0 "dmesg"?
> > 
> > Was this 32bit dom0?
> > 
> > > 
> > > I have discovered the RAM is not the recommended RAM kit for this 
> > > motherboard, so I have ordered 8 GB 
> > > of the correct RAM type.  Do you think this could be the cause?
> > > 
> > 
> > Who knows.. maybe. 
> > 
> > Make sure you run memtest86+ for a long time to make sure the RAM is ok.
> > Sometimes the memory errors will show up only after a couple of _days_ of 
> > running memtest.
> > 
> > http://memtest.org/download/4.00/memtest86+-4.00.iso.zip
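> > 
> > Roughly something like this, assuming you burn it to a CD (the ISO filename
> > inside the zip may differ, and you can just as well boot it from a USB stick
> > or from GRUB):
> > 
> >   wget http://memtest.org/download/4.00/memtest86+-4.00.iso.zip
> >   unzip memtest86+-4.00.iso.zip
> >   wodim memtest86+-4.00.iso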
> > 
> > -- Pasi
> > 
> > > thanks
> > > 
> > > Ian
> > > -----Original Message-----
> > > From: Pasi Kärkkäinen [mailto:pasik@xxxxxx] 
> > > Sent: 22 January 2010 11:23
> > > To: Ian Tobin
> > > Cc: Olivier B.; xen-users@xxxxxxxxxxxxxxxxxxx
> > > Subject: Re: [Xen-users] pv_ops 2.6.31.6
> > > 
> > > On Fri, Jan 22, 2010 at 10:52:04AM -0000, Ian Tobin wrote:
> > > > I've got a major performance issue.  If I boot up the dom0 without 
> > > > starting Xen and copy a large file to it via FTP, it's fine.
> > > > 
> > > > As soon as I run "xend start", the dom0 gets slower and slower until the 
> > > > server is at a crawl and unusable.
> > > > 
> > > > Anyone else getting slow response?
> > > > 
> > > 
> > > I haven't seen that.
> > > 
> > > Please monitor your dom0 with "top" and also with "xm top". 
> > > 
> > > What do they reveal? Does some process start eating more and more CPU 
> > > time? 
> > > Does some process leak memory? Does dom0 have iowait? 
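> > > 
> > > Something like this in dom0 is usually enough to spot it (just a sketch):
> > > 
> > >   top          # per-process CPU and memory usage in dom0
> > >   xm top       # per-domain CPU, memory, network and disk activity
> > >   vmstat 5     # watch the 'wa' column for iowait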
> > > 
> > > Did you limit dom0_mem=512M or similar? 
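> > > 
> > > With GRUB legacy that would go on the xen.gz line in menu.lst, something
> > > like this (the paths are just placeholders):
> > > 
> > >   kernel /boot/xen.gz dom0_mem=512M
> > >   module /boot/vmlinuz-2.6.31.6 root=/dev/sda1 ro console=tty0
> > >   module /boot/initrd-2.6.31.6.img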
> > > 
> > > -- Pasi
> > > 
> > > > 
> > > > 
> > > > 
> > > > -----Original Message-----
> > > > From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx 
> > > > [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Olivier B.
> > > > Sent: 22 January 2010 08:34
> > > > To: xen-users@xxxxxxxxxxxxxxxxxxx
> > > > Subject: Re: [Xen-users] pv_ops 2.6.31.6
> > > > 
> > > > For me restore/live migration doesn't work.
> > > > 
> > > > For dom0 I use version 2.6.31.6 (00751-g600545), with a vanilla 
> > > > 2.6.31.12 pv_ops kernel in the domU, and Debian Xen 3.4.2-2.
> > > > 
> > > > On restore I get this:
> > > > 
> > > > [49532.764004] <4>------------[ cut here ]------------
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004] <4>WARNING: at arch/x86/xen/time.c:180 
> > > > xen_sched_clock+0x7c/0xaf()
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004] <d>Modules linked in:
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004]  nf_conntrack_ipv4
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004]  nf_defrag_ipv4
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004]  xt_state
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004]  nf_conntrack
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004]  dm_snapshot
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004]  [last unloaded: scsi_wait_scan]
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004]
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004] Pid: 30422, comm: kstop/0 Tainted: G        W  
> > > > 2.6.31.12-dae-xen #1
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004] Call Trace:
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004]  [<ffffffff8104e0d1>] warn_slowpath_common+0x88/0xb6
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004]  [<ffffffff8104e121>] warn_slowpath_null+0x22/0x38
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004]  [<ffffffff8100dc4f>] xen_sched_clock+0x7c/0xaf
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004]  [<ffffffff8101829d>] sched_clock+0x9/0xd
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004]  [<ffffffff8106b73b>] sched_clock_cpu+0xa7/0x168
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004]  [<ffffffff81047d4b>] update_rq_clock+0x26/0x48
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004]  [<ffffffff81048c11>] try_to_wake_up+0xac/0x2af
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004]  [<ffffffff8101135d>] ? retint_restore_args+0x5/0x6
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004]  [<ffffffff81048e34>] default_wake_function+0x20/0x36
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004]  [<ffffffff8103bcdd>] __wake_up_common+0x58/0xa2
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004]  [<ffffffff8106120e>] ? wq_barrier_func+0x0/0x36
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004]  [<ffffffff8103e07d>] complete+0x49/0x73
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004]  [<ffffffff8106122e>] wq_barrier_func+0x20/0x36
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004]  [<ffffffff81060d8d>] worker_thread+0x156/0x20d
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004]  [<ffffffff8100dd1f>] ? xen_restore_fl_direct_end+0x0/0x1
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004]  [<ffffffff81065b7f>] ? autoremove_wake_function+0x0/0x5a
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004]  [<ffffffff81060c37>] ? worker_thread+0x0/0x20d
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004]  [<ffffffff81065759>] kthread+0x9b/0xa3
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004]  [<ffffffff810119ea>] child_rip+0xa/0x20
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004]  [<ffffffff81010bac>] ? int_ret_from_sys_call+0x7/0x1b
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004]  [<ffffffff8101135d>] ? retint_restore_args+0x5/0x6
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004]  [<ffffffff810119e0>] ? child_rip+0x0/0x20
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004] <4>---[ end trace bbe4ba0e56e4a4ae ]---
> > > > [49532.764004] BUG: recent printk recursion!
> > > > [49532.764004] <4>------------[ cut here ]------------
> > > > 
> > > > 
> > > > But apart from that, it works fine.
> > > > 
> > > > Olivier
> > > > 
> > > > Pasi Kärkkäinen wrote:
> > > > > On Thu, Jan 21, 2010 at 09:01:21PM -0000, Ian Tobin wrote:
> > > > >   
> > > > >>    Hi,
> > > > >>
> > > > >>
> > > > >>
> > > > >>    Quick question: is the 2.6.31.6 kernel in Jeremy's tree stable 
> > > > >> enough for a
> > > > >>    live environment?  I've been playing with it and it seems to work 
> > > > >> quite well.
> > > > >>
> > > > >>     
> > > > >
> > > > > It works for many people.. so please keep using it, and report any 
> > > > > issues/bugs found. 
> > > > >
> > > > > Also make sure you monitor the changelogs; it's still under 
> > > > > development, so you want to
> > > > > upgrade every now and then to get the latest bits.
> > > > >
> > > > > xen/master branch changelog:
> > > > > http://git.kernel.org/?p=linux/kernel/git/jeremy/xen.git;a=shortlog;h=xen/master
> > > > >
> > > > > Whole git tree changelog:
> > > > > http://git.kernel.org/?p=linux/kernel/git/jeremy/xen.git
> > > > >
> > > > > -- Pasi
> > > > >
> > > > >
> > > 
> > > 
> > 
> > 
> 
> 

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

