
Re: [Xen-devel] Load increase after memory upgrade (part2)



Hi Konrad,

 

I don't want to be pushy, as I have no real issue: I can simply use the Xenified kernel or accept the double load.

But I think this mystery is still open. My last status was that the latest patch you produced resulted in a BUG,

so we still have not checked whether our theory is correct.

 

BR,

Carsten.
 

-----Original Message-----
From: Carsten Schiers <carsten@xxxxxxxxxx>
Sent: Wed 29.02.2012 14:01
Subject: Re: [Xen-devel] Load increase after memory upgrade (part2)
Attachments: debug.log, inline.txt
To: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>;
CC: Sander Eikelenboom <linux@xxxxxxxxxxxxxx>; xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>; Jan Beulich <jbeulich@xxxxxxxx>; Konrad Rzeszutek Wilk <konrad@xxxxxxxxxx>;

I am very sorry. I accidentally started the DomU with the wrong config file, so it's clear why there is no difference

between the two. And unfortunately, the DomU with the correct config file is having a BUG:

 

 [   14.674883] BUG: unable to handle kernel paging request at ffffc7fffffff000
 [   14.674910] IP: [<ffffffff811b4c0b>] swiotlb_bounce+0x2e/0x31
 [   14.674930] PGD 0
 [   14.674940] Oops: 0002 [#1] SMP
 [   14.674952] CPU 0
 [   14.674957] Modules linked in: nfsd exportfs nfs lockd fscache auth_rpcgss nfs_acl sunrpc tda10023 budget_av evdev saa7146_vv videodev v4l2_compat_ioctl32 videobuf_dma_sg videobuf_core budget_core snd_pcm dvb_core snd_timer saa7146 snd ttpci_eeprom soundcore snd_page_alloc i2c_core pcspkr ext3 jbd mbcache xen_netfront xen_blkfront
 [   14.675057]
 [   14.675065] Pid: 0, comm: swapper/0 Not tainted 3.2.8-amd64 #1
 [   14.675079] RIP: e030:[<ffffffff811b4c0b>]  [<ffffffff811b4c0b>] swiotlb_bounce+0x2e/0x31
 [   14.675097] RSP: e02b:ffff880013fabe58  EFLAGS: 00010202
 [   14.675106] RAX: ffff880012800000 RBX: 0000000000000001 RCX: 0000000000001000
 [   14.675116] RDX: 0000000000001000 RSI: ffff880012800000 RDI: ffffc7fffffff000
 [   14.675126] RBP: 0000000000000002 R08: ffffc7fffffff000 R09: ffff880013f98000
 [   14.675137] R10: 0000000000000001 R11: ffff880003376000 R12: ffff8800032c5090
 [   14.675147] R13: 0000000000000149 R14: ffff8800033e0000 R15: ffffffff81601fd8
 [   14.675163] FS:  00007f3ff9893700(0000) GS:ffff880013fa8000(0000) knlGS:0000000000000000
 [   14.675175] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
 [   14.675184] CR2: ffffc7fffffff000 CR3: 0000000012683000 CR4: 0000000000000660
 [   14.675195] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
 [   14.675205] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
 [   14.675216] Process swapper/0 (pid: 0, threadinfo ffffffff81600000, task ffffffff8160d020)
 [   14.675227] Stack:
 [   14.675232]  ffffffff81211826 ffff880002eda000 0000000000000000 ffffc90000408000
 [   14.675251]  00000000000b0150 0000000000000006 ffffffffa013ec4a ffffffff810946cd
 [   14.675270]  ffffffff81099203 ffff880003376000 0000000000000000 ffff880002eda4b0
 [   14.675289] Call Trace:
 [   14.675295]  <IRQ>
 [   14.675307]  [<ffffffff81211826>] ? xen_swiotlb_sync_sg_for_cpu+0x2e/0x47
 [   14.675322]  [<ffffffffa013ec4a>] ? vpeirq+0x7f/0x198 [budget_core]
 [   14.675337]  [<ffffffff810946cd>] ? handle_irq_event_percpu+0x166/0x184
 [   14.675350]  [<ffffffff81099203>] ? __rcu_process_callbacks+0x71/0x2f8
 [   14.675364]  [<ffffffff8104d175>] ? tasklet_action+0x76/0xc5
 [   14.675376]  [<ffffffff8120a9ac>] ? eoi_pirq+0x5b/0x77
 [   14.675388]  [<ffffffff8104cbc6>] ? __do_softirq+0xc4/0x1a0
 [   14.675400]  [<ffffffff8120a022>] ? __xen_evtchn_do_upcall+0x1c7/0x205
 [   14.675412]  [<ffffffff8134b06c>] ? call_softirq+0x1c/0x30
 [   14.675425]  [<ffffffff8100fa47>] ? do_softirq+0x3f/0x79
 [   14.675436]  [<ffffffff8104c996>] ? irq_exit+0x44/0xb5
 [   14.675452]  [<ffffffff8120b032>] ? xen_evtchn_do_upcall+0x27/0x32
 [   14.675464]  [<ffffffff8134b0be>] ? xen_do_hypervisor_callback+0x1e/0x30
 [   14.675473]  <EOI>
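For context, the faulting function copies data between a driver buffer and a low-memory bounce buffer. The sketch below is an illustrative model only (the names and signature are simplified, not the kernel's actual `swiotlb_bounce`); the oops happens because the computed bounce address (RDI = ffffc7fffffff000) points at an unmapped page, so the copy faults.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Simplified, hypothetical model of a swiotlb bounce copy: move data
 * between the driver's original buffer and a bounce buffer that the
 * device can actually address. If the bounce pointer is bogus (as in
 * the oops above), the memcpy dereferences an unmapped address. */
enum dma_dir { TO_DEVICE, FROM_DEVICE };

static void swiotlb_bounce_sketch(char *bounce, char *orig,
                                  size_t len, enum dma_dir dir)
{
    if (dir == TO_DEVICE)
        memcpy(bounce, orig, len);   /* stage CPU data for the device */
    else
        memcpy(orig, bounce, len);   /* copy device data back to CPU */
}
```
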

 

Complete log is attached.

 

BR, Carsten.
 

-----Original Message-----
To: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>;
CC: Konrad Rzeszutek Wilk <konrad@xxxxxxxxxx>; xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>; Jan Beulich <jbeulich@xxxxxxxx>; Sander Eikelenboom <linux@xxxxxxxxxxxxxx>;
From: Carsten Schiers <carsten@xxxxxxxxxx>
Sent: Wed 29.02.2012 13:16
Subject: Re: [Xen-devel] Load increase after memory upgrade (part2)
Attachment: inline.txt

Great news: it works, and the load is back to normal. In the attached graph you can see the peak

in blue (compilation of the patched 3.2.8 kernel) and then, after 16:00, the going-live of the

video DomU. We are below an average of 7% usage (figures are in per mille).


Thanks so much. Is that already "the final patch"?

 

BR, Carsten.

 


 

-----Original Message-----
To: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>;
CC: Sander Eikelenboom <linux@xxxxxxxxxxxxxx>; xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>; Jan Beulich <jbeulich@xxxxxxxx>; Konrad Rzeszutek Wilk <konrad@xxxxxxxxxx>;
From: Carsten Schiers <carsten@xxxxxxxxxx>
Sent: Tue 28.02.2012 15:39
Subject: Re: [Xen-devel] Load increase after memory upgrade (part2)
Attachment: inline.txt

Well, let me check over a longer period of time, and especially whether the DomU is still

working (I can do that only from home), but the load looks pretty good after applying the

patch to 3.2.8 :-D.

 

BR,

Carsten.
 

-----Original Message-----
To: Jan Beulich <JBeulich@xxxxxxxx>;
CC: Konrad Rzeszutek Wilk <konrad@xxxxxxxxxx>; xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>; Carsten Schiers <carsten@xxxxxxxxxx>; Sander Eikelenboom <linux@xxxxxxxxxxxxxx>;
From: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Sent: Fri 17.02.2012 16:18
Subject: Re: [Xen-devel] Load increase after memory upgrade (part2)
On Thu, Feb 16, 2012 at 08:56:53AM +0000, Jan Beulich wrote:
> >>> On 15.02.12 at 20:28, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx> wrote:
> >@@ -1550,7 +1552,11 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
> > struct page **pages;
> > unsigned int nr_pages, array_size, i;
> > gfp_t nested_gfp = (gfp_mask & GFP_RECLAIM_MASK) | __GFP_ZERO;
> >-
> >+ gfp_t dma_mask = gfp_mask & (__GFP_DMA | __GFP_DMA32);
> >+ if (xen_pv_domain()) {
> >+ if (dma_mask == (__GFP_DMA | __GFP_DMA32))
>
> I didn't spot where you force this normally invalid combination, without
> which the change won't affect vmalloc32() in a 32-bit kernel.
>
> >+ gfp_mask &= (__GFP_DMA | __GFP_DMA32);
>
> gfp_mask &= ~(__GFP_DMA | __GFP_DMA32);
>
> Jan

Duh!
Good eyes. Thanks for catching that.

>
> >+ }
> > nr_pages = (area->size - PAGE_SIZE) >> PAGE_SHIFT;
> > array_size = (nr_pages * sizeof(struct page *));
> >
>
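The one-character bug Jan caught above can be shown in isolation. The sketch below uses hypothetical flag values (the real GFP bits are defined in include/linux/gfp.h): `&=` with the bare mask keeps only the DMA bits and discards the rest of `gfp_mask`, while `&= ~` clears the DMA bits and keeps everything else, which is what the patch intends.

```c
#include <assert.h>

/* Hypothetical flag values for illustration only; the real GFP bits
 * live in include/linux/gfp.h. */
#define X_GFP_DMA    0x01u
#define X_GFP_DMA32  0x04u
#define X_GFP_KERNEL 0xd0u

/* The posted line, gfp_mask &= (__GFP_DMA | __GFP_DMA32):
 * keeps ONLY the DMA bits, dropping everything else from the mask. */
static unsigned int mask_buggy(unsigned int gfp_mask)
{
    gfp_mask &= (X_GFP_DMA | X_GFP_DMA32);
    return gfp_mask;
}

/* Jan's correction, gfp_mask &= ~(__GFP_DMA | __GFP_DMA32):
 * clears the DMA bits while leaving the rest of the mask intact. */
static unsigned int mask_fixed(unsigned int gfp_mask)
{
    gfp_mask &= ~(X_GFP_DMA | X_GFP_DMA32);
    return gfp_mask;
}
```
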

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 


 

