
Re: [Xen-ia64-devel] [Q] about assign_domain_page_replace



On Wed, Jun 06, 2007 at 04:31:16PM +0900, Akio Takebe wrote:

> >domain_page_flush_and_put() isn't aware of NULL-owner pages, I'll fix it.
> >However more issues seem to be there.

I sent out the patch which fixes the p2m exposure issues.
However, I don't think that patch resolves your issue.
At best it only conceals it, which is worse than a panic.


> I use the following patch.
> diff -r 0cf6b75423e9 xen/arch/ia64/xen/mm.c
> --- a/xen/arch/ia64/xen/mm.c    Mon Jun 04 14:17:54 2007 -0600
> +++ b/xen/arch/ia64/xen/mm.c    Wed Jun 06 18:04:59 2007 +0900
> @@ -1150,6 +1150,16 @@ assign_domain_page_replace(struct domain
>          //   => create_host_mapping()
>          //      => assign_domain_page_replace()
>          if (mfn != old_mfn) {
> +            printk("mfn=0x%016lx, old_mfn=0x%016lx\n", mfn, old_mfn);
> +            printk("%s: old_mfn->count_info=%u\n", __func__, 
> +                     (u32)(mfn_to_page(old_mfn)->count_info&PGC_count_mask));
> +            printk("%s: mfn->count_info=%u\n", __func__, 
> +                     (u32)(mfn_to_page(mfn)->count_info&PGC_count_mask));
> +            printk("%s: old_mfn->u.inuse._domain=%u\n", __func__,
> +                     (u32)(mfn_to_page(old_mfn)->u.inuse._domain));
> +            printk("%s: mfn->u.inuse._domain=%u\n", __func__,
> +                     (u32)(mfn_to_page(mfn)->u.inuse._domain));
> +            dump_stack();
>              domain_put_page(d, mpaddr, pte, old_pte, 1);
>          }
>      }
> 
> The result of booting domU is below.

There seem to be several issues. Hmmmm...
Which domains do 32896 and 61177984 correspond to?
My guess is
  _domain=61177984 => dom0
  _domain=32896    => the newly created domain.
Is that correct?
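(For reference, printing the owner's domain ID directly would make the
correspondence unambiguous. A sketch only, assuming the usual
mfn_to_page()/page_get_owner() accessors in this tree:)

    /*
     * Sketch only: print the owning domain's ID instead of the raw
     * pickled _domain value.  Assumes mfn_to_page() and
     * page_get_owner() as defined in this tree; an unowned page
     * has a NULL owner.
     */
    static void dump_page_owner(const char *tag, unsigned long mfn)
    {
        struct page_info *page = mfn_to_page(mfn);
        struct domain *owner = page_get_owner(page);

        printk("%s: mfn=0x%016lx count_info=%u owner=%d\n",
               tag, mfn,
               (u32)(page->count_info & PGC_count_mask),
               owner != NULL ? (int)owner->domain_id : -1);
    }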


>  (XEN) domain.c:536: arch_domain_create:536 domain 1 pervcpu_vhpt 1           
> (XEN) tlb_track.c:69: allocated 256 num_entries 256 num_free 256              
> (XEN) tlb_track.c:115: hash 0xf000004084af0000 hash_size 512                  
> (XEN) regionreg.c:193: ### domain f0000000040f8080: rid=80000-c0000 mp_rid=2000
> (XEN) domain.c:573: arch_domain_create: domain=f0000000040f8080               
> (XEN) mfn=0x000000000102003d, old_mfn=0x0000000001020001                      
> (XEN) assign_domain_page_replace: old_mfn->count_info=0                       
> (XEN) assign_domain_page_replace: mfn->count_info=4                           
> (XEN) assign_domain_page_replace: old_mfn->u.inuse._domain=0                  
> (XEN) assign_domain_page_replace: mfn->u.inuse._domain=32896
> (XEN)                                                                         
> (XEN) Call Trace:                                                             
> (XEN)  [<f0000000040ab1b0>] show_stack+0x80/0xa0                              
> (XEN)                                 sp=f000000007b27c10 bsp=f000000007b215a8
> (XEN)  [<f00000000406f750>] assign_domain_page_replace+0x220/0x260            
> (XEN)                                 sp=f000000007b27de0 bsp=f000000007b21540
> (XEN)  [<f000000004070a20>] __dom0vp_add_physmap+0x330/0x630                  
> (XEN)                                 sp=f000000007b27de0 bsp=f000000007b214d8
> (XEN)  [<f0000000040524a0>] do_dom0vp_op+0x1e0/0x4d0                          
> (XEN)                                 sp=f000000007b27df0 bsp=f000000007b21498
> (XEN)  [<f000000004002e30>] fast_hypercall+0x170/0x340                        
> (XEN)                                 sp=f000000007b27e00 bsp=f000000007b21498


Foreign domain page mapping doesn't seem to work correctly.
The domain builder seems to map a page which belongs to the newly
created domain. If so, old_mfn->count_info=0 and
old_mfn->u.inuse._domain=0 shouldn't happen.
Can you confirm the following?

  - Does dom0 issue this hypercall? (i.e. what is the current domain?)

  - If so, what is the pseudo-physical address used by dom0?
    I guess it is in the p2m table area. Is this correct?
    You can find the area with cat /proc/iomem | grep 'Xen p2m table',
    or by looking for a dom0 kernel boot message like the following

    > Xen p2m: to [0x0000000300000000, 0x0000000303ffc000) (65536 KBytes)
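(If it helps, a trivial range test can flag __dom0vp_add_physmap() calls
whose dom0 pseudo-physical address falls inside that area. A sketch only;
the bounds below are hypothetical, copied from the example boot line
above, and must be replaced with the range reported on your machine:)

    /*
     * Sketch only: flag calls whose dom0 pseudo-physical address lies
     * inside the exposed p2m table area.  P2M_AREA_START/END are
     * hypothetical values taken from the example 'Xen p2m:' boot line;
     * use the range reported on the test machine instead.
     */
    #define P2M_AREA_START  0x0000000300000000UL
    #define P2M_AREA_END    0x0000000303ffc000UL

    static inline int mpaddr_in_p2m_area(unsigned long mpaddr)
    {
        return mpaddr >= P2M_AREA_START && mpaddr < P2M_AREA_END;
    }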

If the above guess is correct, then destroying the domain shouldn't
cause a panic. Can you check which mfn causes the panic?


> (XEN) vcpu.c:1059:d1 vcpu_get_lrr0: Unmasked interrupts unsupported           
> (XEN) vcpu.c:1068:d1 vcpu_get_lrr1: Unmasked interrupts unsupported           
> (XEN) domain.c:943:d1 Domain set shared_info_va to 0xfffffffffff00000         
> (XEN) mfn=0x0000000000061632, old_mfn=0x000000000006708f                      
> (XEN) assign_domain_page_replace: old_mfn->count_info=1                       
> (XEN) assign_domain_page_replace: mfn->count_info=3                           
> (XEN) assign_domain_page_replace: old_mfn->u.inuse._domain=61177984           
> (XEN) assign_domain_page_replace: mfn->u.inuse._domain=32896                  
> (XEN)                                                                         
> (XEN) Call Trace:                                                             
> (XEN)  [<f0000000040ab1b0>] show_stack+0x80/0xa0                              
> (XEN)                                 sp=f000000007b2fbe0 bsp=f000000007b29488
> (XEN)  [<f00000000406f750>] assign_domain_page_replace+0x220/0x260            
> (XEN)                                 sp=f000000007b2fdb0 bsp=f000000007b29420
> (XEN)  [<f000000004070530>] create_grant_host_mapping+0x1d0/0x390             
> (XEN)                                 sp=f000000007b2fdb0 bsp=f000000007b293b8
> (XEN)  [<f000000004021110>] do_grant_table_op+0xcb0/0x3350                    
> (XEN)                                 sp=f000000007b2fdc0 bsp=f000000007b292b0
> (XEN)  [<f000000004002e30>] fast_hypercall+0x170/0x340                        
> (XEN)                                 sp=f000000007b2fe00 bsp=f000000007b292b0
... snip...

The above message implies that something goes wrong in dom0.
The backend shouldn't issue a grant table mapping for a pseudo-physical
address to which a page is already assigned.
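(A debugging check along these lines in create_grant_host_mapping()
might help catch it. A sketch only; it assumes
lookup_noalloc_domain_pte(), pte_present() and pte_pfn() behave as they
do elsewhere in xen/arch/ia64/xen/mm.c, so it would have to live in that
file:)

    /*
     * Sketch only: warn when a grant mapping is about to be installed
     * at a pseudo-physical address which already has a page assigned.
     * Assumes lookup_noalloc_domain_pte(), pte_present() and pte_pfn()
     * as used elsewhere in xen/arch/ia64/xen/mm.c.
     */
    static void warn_if_gpaddr_already_assigned(struct domain *d,
                                                unsigned long gpaddr)
    {
        volatile pte_t *pte = lookup_noalloc_domain_pte(d, gpaddr);
        pte_t entry;

        if (pte == NULL)
            return;
        entry = *pte;
        if (pte_present(entry))
            printk("%s: d%d gpaddr 0x%lx already maps mfn 0x%lx\n",
                   __func__, d->domain_id, gpaddr, pte_pfn(entry));
    }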

-- 
yamahata

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel


 

