
Re: [Xen-devel] Deadlocks by p2m_lock and event_lock



> Hi, Tim and Andres
> The patch fixes part of this issue, but in handle_mmio the function
> hvmemul_do_io() is called and the p2m lock is taken again via
> get_gfn_unshare(), so a deadlock can still be triggered.

Typically hvmemul_do_io gets the zero gfn, because in many cases that's
the 'ram_gpa' it is passed. However, in the case of mmio, and particularly
stdvga, ram_gpa is the address of the data to be copied to the framebuffer.
So it is in principle ok to get_gfn in hvmemul_do_io.

There are two solutions:
1. msix_capability_init does not call p2m_change_entry_type_global. See my
previous email. If we want to resync the
EPT/NPT/traditional/VT-d/IOMMU/superduper TLBs, we can just do that
explicitly. I hope.

2. hvmemul_do_io gets a ref on the mfn underlying ram_gpa, and holds
that for the critical section, instead of the p2m lock. One way to achieve
this is:

    /* Check for a paged-out page */
    ram_mfn = get_gfn_unshare(curr->domain, ram_gfn, &p2mt);
    if ( this or that )
    { ... handle ... }
    /* Take a reference on the underlying page so it stays live... */
    if ( mfn_valid(mfn_x(ram_mfn)) )
        get_page(mfn_to_page(mfn_x(ram_mfn)), curr->domain);
    /* ...and drop the gfn lock instead of holding it across the I/O */
    put_gfn(curr->domain, ram_gfn);

    /* Replace put_gfn on all exit paths with put_page */
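
Roughly (this is a sketch, not a complete patch), an exit path that today
does put_gfn(curr->domain, ram_gfn) would instead drop the page reference
taken above, with the same mfn_valid() guard:

    if ( mfn_valid(mfn_x(ram_mfn)) )
        put_page(mfn_to_page(mfn_x(ram_mfn)));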

This will ensure the target page is live and sane while not holding the
p2m lock. Xudong, did that make sense? Do you think you could try that and
report back?

Thanks!
Andres

>
> (XEN) Xen call trace:
> (XEN)    [<ffff82c4801261a3>] _spin_lock+0x1b/0xa8
> (XEN)    [<ffff82c4801070d3>] notify_via_xen_event_channel+0x21/0x106
> (XEN)    [<ffff82c4801b6883>] hvm_buffered_io_send+0x1f1/0x21b
> (XEN)    [<ffff82c4801bbd3a>] stdvga_intercept_mmio+0x491/0x4c7
> (XEN)    [<ffff82c4801b5d58>] hvm_io_intercept+0x218/0x244
> (XEN)    [<ffff82c4801aa931>] hvmemul_do_io+0x55a/0x716
> (XEN)    [<ffff82c4801aab1a>] hvmemul_do_mmio+0x2d/0x2f
> (XEN)    [<ffff82c4801ab239>] hvmemul_write+0x181/0x1a2
> (XEN)    [<ffff82c4801963f0>] x86_emulate+0xcad3/0xfbdf
> (XEN)    [<ffff82c4801a9d2e>] hvm_emulate_one+0x120/0x1af
> (XEN)    [<ffff82c4801b63cb>] handle_mmio+0x4e/0x1d1
> (XEN)    [<ffff82c4801afd72>] hvm_hap_nested_page_fault+0x210/0x37f
> (XEN)    [<ffff82c4801d2419>] vmx_vmexit_handler+0x1523/0x17d0
>
> Thanks,
> -Xudong
>
>> -----Original Message-----
>> From: Tim Deegan [mailto:tim@xxxxxxx]
>> Sent: Saturday, March 10, 2012 12:56 AM
>> To: Andres Lagar-Cavilla
>> Cc: Hao, Xudong; Keir Fraser; xen-devel@xxxxxxxxxxxxxxxxxxx; Zhang,
>> Xiantao;
>> JBeulich@xxxxxxxx
>> Subject: Re: [Xen-devel] Deadlocks by p2m_lock and event_lock
>>
>> At 08:29 -0800 on 09 Mar (1331281767), Andres Lagar-Cavilla wrote:
>> > >> I don't know about the event lock, but it seems unwise to call in
>> > >> to handle_mmio with a gfn lock held.  How about fixing the other
>> path?
>> > >>
>> > >> diff -r 04673ecb9d78 xen/arch/x86/hvm/hvm.c
>> > >> --- a/xen/arch/x86/hvm/hvm.c    Thu Mar 08 16:40:05 2012 +0000
>> > >> +++ b/xen/arch/x86/hvm/hvm.c    Fri Mar 09 11:15:25 2012 +0000
>> > >> @@ -1324,10 +1324,11 @@ int hvm_hap_nested_page_fault(unsigned l
>> > >>      if ( (p2mt == p2m_mmio_dm) ||
>> > >>           (access_w && (p2mt == p2m_ram_ro)) )
>> > >>      {
>> > >> +        put_gfn(p2m->domain, gfn);
>> > >>          if ( !handle_mmio() )
>> > >>              hvm_inject_exception(TRAP_gp_fault, 0, 0);
>> > >>          rc = 1;
>> > >> -        goto out_put_gfn;
>> > >> +        goto out;
>> > >>      }
>> > >>
>> > >>  #ifdef __x86_64__
>> > >> @@ -1379,6 +1380,7 @@ int hvm_hap_nested_page_fault(unsigned l
>> > >>
>> > >>  out_put_gfn:
>> > >>      put_gfn(p2m->domain, gfn);
>> > >> +out:
>> > >>      if ( paged )
>> > >>          p2m_mem_paging_populate(v->domain, gfn);
>> > >>      if ( req_ptr )
>> > >
>> > > Yes, it's fine to release the p2m lock before calling handle_mmio.
>> >
>> > Ack
>>
>> OK, applied.
>>
>> Tim.
>


