
Re: [Xen-devel] [PATCH for-4.5 v3] x86/hvm: remove stray lock release from hvm_ioreq_server_init()



On Fri, Sep 26, 2014 at 02:43:38PM +0000, Paul Durrant wrote:
> > -----Original Message-----
> > From: Vitaly Kuznetsov [mailto:vkuznets@xxxxxxxxxx]
> > Sent: 26 September 2014 15:21
> > To: xen-devel@xxxxxxxxxxxxxxxxxxxx
> > Cc: Paul Durrant; Ian Campbell; Jan Beulich; Andrew Jones
> > Subject: [PATCH for-4.5 v3] x86/hvm: remove stray lock release from
> > hvm_ioreq_server_init()
> > 
> > If the HVM_PARAM_IOREQ_PFN, HVM_PARAM_BUFIOREQ_PFN, or
> > HVM_PARAM_BUFIOREQ_EVTCHN parameters are read while the guest domain
> > is dying, the following assertion fails:
> > 
> > (XEN) Assertion '_raw_spin_is_locked(lock)' failed at
> > ...workspace/KERNEL/xen/xen/include/asm/spinlock.h:18
> > (XEN) ----[ Xen-4.5-unstable  x86_64  debug=y  Not tainted ]----
> > ...
> > (XEN) Xen call trace:
> > (XEN)    [<ffff82d08012b07f>] _spin_unlock+0x27/0x30
> > (XEN)    [<ffff82d0801b6103>] hvm_create_ioreq_server+0x3df/0x49a
> > (XEN)    [<ffff82d0801bcceb>] do_hvm_op+0x12bf/0x27a0
> > (XEN)    [<ffff82d08022b9bb>] syscall_enter+0xeb/0x145
> > 
> > The root cause is that ioreq_server.lock is released twice: first in
> > hvm_ioreq_server_init() and then again in hvm_create_ioreq_server().
> > Drop the lock release from hvm_ioreq_server_init(), which never takes
> > the lock itself, and do some minor label cleanup.
> > 
> > Signed-off-by: Vitaly Kuznetsov <vkuznets@xxxxxxxxxx>
> 
> Looks good to me.
> 
> Reviewed-by: Paul Durrant <paul.durrant@xxxxxxxxxx>

And Release-Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
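
For reference, the locking convention the patch restores is that the
caller owns ioreq_server.lock across the whole init call, so the callee
must leave the lock alone even on its error paths. A minimal,
self-contained sketch of that pattern (illustrative only -- a pthread
mutex stands in for Xen's spinlock, and server_init()/create_server()
are simplified stand-ins for the real functions):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t ioreq_server_lock = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for hvm_ioreq_server_init(): called with the lock held
 * and must return with it still held, even on failure.  The stray
 * unlock removed by this patch broke exactly that contract. */
static int server_init(int fail)
{
    return fail ? -1 : 0;   /* error paths just return; lock untouched */
}

/* Stand-in for hvm_create_ioreq_server(): the lock is taken and
 * released exactly once, here in the caller. */
static int create_server(int fail)
{
    pthread_mutex_lock(&ioreq_server_lock);
    int rc = server_init(fail);
    pthread_mutex_unlock(&ioreq_server_lock);
    return rc;
}

int main(void)
{
    printf("success path: rc = %d\n", create_server(0));
    printf("failure path: rc = %d\n", create_server(1));
    return 0;
}

Whichever way create_server() returns, the lock/unlock count stays
balanced -- which is what the old fail1 path violated, leading to the
second unlock tripping the spin_is_locked assertion.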
> 
> > ---
> > Changes from v1:
> > - Instead of protecting against creating an ioreq server while the guest
> >   domain is dying, remove the stray ioreq_server.lock release from
> >   hvm_ioreq_server_init(). Rename the patch accordingly.
> >   [Paul Durrant]
> > 
> > Changes from v2:
> > - Cleanup labels in hvm_ioreq_server_init(), shorten patch name
> >   [Jan Beulich]
> > ---
> >  xen/arch/x86/hvm/hvm.c | 12 +++++-------
> >  1 file changed, 5 insertions(+), 7 deletions(-)
> > 
> > diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> > index 0a20cbe..94c58e1 100644
> > --- a/xen/arch/x86/hvm/hvm.c
> > +++ b/xen/arch/x86/hvm/hvm.c
> > @@ -970,30 +970,28 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s, struct domain *d,
> > 
> >      rc = hvm_ioreq_server_alloc_rangesets(s, is_default);
> >      if ( rc )
> > -        goto fail1;
> > +        return rc;
> > 
> >      rc = hvm_ioreq_server_map_pages(s, is_default, handle_bufioreq);
> >      if ( rc )
> > -        goto fail2;
> > +        goto fail_map;
> > 
> >      for_each_vcpu ( d, v )
> >      {
> >          rc = hvm_ioreq_server_add_vcpu(s, is_default, v);
> >          if ( rc )
> > -            goto fail3;
> > +            goto fail_add;
> >      }
> > 
> >      return 0;
> > 
> > - fail3:
> > + fail_add:
> >      hvm_ioreq_server_remove_all_vcpus(s);
> >      hvm_ioreq_server_unmap_pages(s, is_default);
> > 
> > - fail2:
> > + fail_map:
> >      hvm_ioreq_server_free_rangesets(s, is_default);
> > 
> > - fail1:
> > -    spin_unlock(&d->arch.hvm_domain.ioreq_server.lock);
> >      return rc;
> >  }
> > 
> > --
> > 1.9.3
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

