
[Xen-devel] RE: [PATCH] mem_sharing: fix race condition of nominate and unshare



Hi Jui-Hao:
 
    The domain ID printed out is 4.
  
   Well, the teardown path is:

     domain_destroy() -> complete_domain_destroy() -> arch_domain_destroy() ->
     paging_final_teardown() -> hap_final_teardown() -> p2m_teardown() ->
     mem_sharing_unshare_page()

   so it looks like it is possible that the domain is destroyed before its handle is removed from the hash table.
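 
   If that chain is right, mem_sharing_share_pages() on one CPU can race with the final teardown on another: get_domain_by_id() already returns NULL for the destroyed domain while its gfn_info entries still sit in the hash table, so the BUG_ON(!d) fires. A minimal sketch of a defensive variant (gfn_info and its domain field follow the 4.0-era mem_sharing.c; this is just an illustration, not the actual fix):

       /* Hypothetical guard: report a stale handle instead of
        * BUG_ON() when the owning domain is already destroyed. */
       static int gfn_account_guarded(struct gfn_info *gfn)
       {
           struct domain *d = get_domain_by_id(gfn->domain);

           if ( d == NULL )
               return -1;      /* domain gone: caller drops the handle */

           /* ... normal page-count accounting against d ... */

           put_domain(d);      /* drop the ref from get_domain_by_id() */
           return 0;
       }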
 
  Furthermore, I added the following checks (at line 637 in mem_sharing.c):
    if ( mem_sharing_gfn_account(gfn_get_info(&ce->gfns), 1) == -1 )
    {
        printk("=====client not found, server %d client %d\n",
               gfn_get_info(&se->gfns)->domain, gfn_get_info(&ce->gfns)->domain);
        ret = XEN_DOMCTL_MEM_SHARING_C_HANDLE_INVALID;
        goto err_out;
    }

    if ( mem_sharing_gfn_account(gfn_get_info(&se->gfns), 1) == -1 )
    {
        printk("=====server not found, server %d client %d\n",
               gfn_get_info(&se->gfns)->domain, gfn_get_info(&ce->gfns)->domain);
        ret = XEN_DOMCTL_MEM_SHARING_C_HANDLE_INVALID;
        goto err_out;
    }
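 
     With these checks, share_pages backs out with XEN_DOMCTL_MEM_SHARING_C_HANDLE_INVALID whenever either handle's owning domain can no longer be accounted, rather than proceeding on a stale handle.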
 
     Those logs did get printed out during the test. And when all domains are destroyed, I print out every hash entry and the table is empty,
     so the accounting is correct in the end.
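 
     Regarding your question about detecting a dying domain: as far as I know, struct domain carries an is_dying field that becomes nonzero once domain_kill() has started tearing the domain down, so a check along these lines might work (only a sketch, assuming the 4.0-era field names):

         struct domain *d = get_domain_by_id(gfn->domain);
         if ( d != NULL )
         {
             if ( d->is_dying )   /* teardown already in progress */
                 printk("domain %d is dying\n", d->domain_id);
             put_domain(d);       /* drop the ref from get_domain_by_id() */
         }
         else
             printk("domain %d already destroyed\n", gfn->domain);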
     
     What's your opinion?
 
> Date: Sat, 15 Jan 2011 01:00:27 +0800
> Subject: Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
> From: juihaochiang@xxxxxxxxx
> To: tinnycloud@xxxxxxxxxxx
> CC: xen-devel@xxxxxxxxxxxxxxxxxxx; tim.deegan@xxxxxxxxxx
>
> Hi, all:
>
> Is it possible that the domain is dying?
> In mem_sharing_gfn_account(), could you try the following?
>
> d = get_domain_by_id(gfn->domain);
> /* add this line to see which domain id you get */
> if ( !d ) printk("null domain %x\n", gfn->domain);
> BUG_ON(!d);
>
> When this domain id is printed out, could you check whether that
> domain is dying?
> If the domain is dying, then the question becomes:
> "Given a domain id from the gfn_info, how do we know the domain is
> dying? Or have we stored wrong information in the hash list?"
>
> 2011/1/14 MaoXiaoyun <tinnycloud@hotmail.com>:
> > Hi Tim:
> >
> >      Thanks for the patch. Xen panics under a more stressful test (12 HVMs,
> > each of them rebooting every 30 minutes).
> >      Please refer to the log below.
> >
> > blktap_sysfs_create: adding attributes for dev ffff8801044bc400
> > blktap_sysfs_create: adding attributes for dev ffff8801044bc200
> > __ratelimit: 4 callbacks suppressed
> > (XEN) Xen BUG at mem_sharing.c:454
> > (XEN) ----[ Xen-4.0.0  x86_64  debug=n  Not tainted ]----
> > (XEN) CPU:    0
> > (XEN) RIP:    e008:[<ffff82c4801bf52c>] mem_sharing_gfn_account+0x5c/0x70
> > (XEN) RFLAGS: 0000000000010246   CONTEXT: hypervisor
> > (XEN) rax: 0000000000000000   rbx: 0000000000000001   rcx: 0000000000000000
> > (XEN) rdx: 0000000000000000   rsi: 000000000000005f   rdi: 000000000000005f
> > (XEN) rbp: ffff8305894f0fc0   rsp: ffff82c48035fc48   r8:  ffff82f600000000
> > (XEN) r9:  00007fffcdbd0fb8   r10: ffff82c4801f8c70   r11: 0000000000000282
> > (XEN) r12: ffff82c48035fe28   r13: ffff8303192a3bf0   r14: ffff82f60b966700
> > (XEN) r15: 0000000000000006   cr0: 0000000080050033   cr4: 00000000000026f0
> > (XEN) cr3: 000000032ea58000   cr2: ffff880119c2e668
> > (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
> > (XEN) Xen stack trace from rsp=ffff82c48035fc48:
> > (XEN)    00000000fffffff7 ffff82c4801bf8c0 0000000000553b86 ffff8305894f0fc0
> > (XEN)    ffff8302f4d12cf0 0000000000553b86 ffff82f603e28580 ffff82c48035fe38
> > (XEN)    ffff83023fe60000 ffff82c48035fe28 0000000000305000 0000000000000006
> > (XEN)    0000000000000006 ffff82c4801c0724 ffff82c4801447da 0000000000553b86
> > (XEN)    000000000001a938 00000000006ee000 00000000006ee000 ffff82c4801457fd
> > (XEN)    0000000000000096 0000000000000001 ffff82c48035fd30 0000000000000080
> > (XEN)    ffff82c480376980 ffff82c480251080 0000000000000292 ffff82c48011c519
> > (XEN)    ffff82c48035fe28 0000000000000080 0000000000000000 ffff8302ef312fa0
> > (XEN)    ffff8300b4aee000 ffff82c48025f080 ffff82c480251080 ffff82c480118351
> > (XEN)    0000000000000080 0000000000000000 ffff8300b4aef708 00000de9e9529c40
> > (XEN)    ffff8300b4aee000 0000000000000292 ffff8305cf9f09b8 0000000000000001
> > (XEN)    0000000000000001 0000000000000000 00000000002159e6 fffffffffffffff3
> > (XEN)    00000000006ee000 ffff82c48035fe28 0000000000305000 0000000000000006
> > (XEN)    0000000000000006 ffff82c480104373 ffff8305cf9f09c0 ffff82c4801a0b63
> > (XEN)    00000000159e6070 ffff8305cf9f0000 0000000000000007 ffff83023fe60180
> > (XEN)    0000000600000039 0000000000000000 00007fae14b30003 000000000054fdad
> > (XEN)    0000000000553b86 ffffffffff600429 000000004d2f26e8 0000000000088742
> > (XEN)    0000000000000000 00007fae14b30070 00007fae14b30000 00007fffcdbd0f50
> > (XEN)    00007fae14b30078 0000000000430e98 00007fffcdbd0fb8 0000000000cd39c8
> > (XEN)    0005aeb700000007 00007fae15bd2ab0 0000000000000000 0000000000000246
> > (XEN) Xen call trace:
> > (XEN)    [<ffff82c4801bf52c>] mem_sharing_gfn_account+0x5c/0x70
> > (XEN)    [<ffff82c4801bf8c0>] mem_sharing_share_pages+0x170/0x310
> > (XEN)    [<ffff82c4801c0724>] mem_sharing_domctl+0xe4/0x130
> > (XEN)    [<ffff82c4801447da>] __find_next_bit+0x6a/0x70
> > (XEN)    [<ffff82c4801457fd>] arch_do_domctl+0xdad/0x1f90
> > (XEN)    [<ffff82c48011c519>] cpumask_raise_softirq+0x89/0xa0
> > (XEN)    [<ffff82c480118351>] csched_vcpu_wake+0x101/0x1b0
> > (XEN)    [<ffff82c480104373>] do_domctl+0x163/0x1000
> > (XEN)    [<ffff82c4801a0b63>] hvm_set_callback_irq_level+0xe3/0x110
> > (XEN)    [<ffff82c4801e3169>] syscall_enter+0xa9/0xae
> > (XEN)
> > (XEN)
> > (XEN) ****************************************
> > (XEN) Panic on CPU 0:
> > (XEN) Xen BUG at mem_sharing.c:454
> > (XEN) ****************************************
> > (XEN)
> > (XEN) Manual reset required ('noreboot' specified)
> >
>
> Bests,
> Jui-Hao
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

