
Re: [Xen-devel] [PATCH v18 01/11] x86/hvm/ioreq: maintain an array of ioreq servers rather than a list



> -----Original Message-----
> From: Xen-devel [mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxxx] On Behalf
> Of Paul Durrant
> Sent: 28 March 2018 15:44
> To: 'Jan Beulich' <JBeulich@xxxxxxxx>
> Cc: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; xen-
> devel@xxxxxxxxxxxxxxxxxxxx
> Subject: Re: [Xen-devel] [PATCH v18 01/11] x86/hvm/ioreq: maintain an
> array of ioreq servers rather than a list
> 
> > -----Original Message-----
> > From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> > Sent: 26 March 2018 12:22
> > To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> > Cc: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; xen-
> > devel@xxxxxxxxxxxxxxxxxxxx
> > Subject: Re: [PATCH v18 01/11] x86/hvm/ioreq: maintain an array of ioreq
> > servers rather than a list
> >
> > >>> On 22.03.18 at 12:55, <paul.durrant@xxxxxxxxxx> wrote:
> > > v18:
> > >  - non-trivial re-base.
> > >  - small modification to FOR_EACH... macro to iterate backwards, to
> > >    maintain a previous undocumented but useful semantic that secondary
> > >    emulators are selected in favour of qemu.
> >
> > If this is intentional (and necessary), I think there should be a
> > code comment saying why (and implicitly discouraging people from
> > changing it).
> >
> 
> Ok.
> 
> > I'm also having difficulty seeing why that was the case before
> > this patch: hvm_create_ioreq_server() doesn't insert the default
> > one at the list tail. Are you perhaps basing this solely on the
> > assumption that secondary ones would be created after the
> > default one?
> >
> 
> It's not really about default vs. non-default. I was running my toy emulators
> from the shell after starting up the guest paused. So I relied on (upstream) qemu
> being the first emulator to create its ioreq server, and then my toy emulator
> would be inserted ahead of it in the list.
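
For reference, the kind of comment I have in mind next to the macro (just a
sketch, the final wording may differ):

  /*
   * NOTE: The iteration is backwards so that more recently created ioreq
   *       servers (e.g. secondary emulators started after qemu) are
   *       favoured in hvm_select_ioreq_server(). This preserves an
   *       undocumented semantic of the old list-based implementation.
   */
  #define FOR_EACH_IOREQ_SERVER(d, id, s) \
      for ( (id) = MAX_NR_IOREQ_SERVERS; (id) != 0; ) \
          if ( !(s = GET_IOREQ_SERVER(d, --(id))) ) \
              continue; \
          else
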
> 
> > > @@ -316,7 +344,8 @@ static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
> > >      spin_lock(&s->lock);
> > >
> > >      rc = alloc_unbound_xen_event_channel(v->domain, v->vcpu_id,
> > > -                                         s->emulator->domain_id, NULL);
> > > +                                         s->emulator->domain_id,
> > > +                                         NULL);
> >
> > Stray change?
> >
> 
> Oh yes.
> 
> > >  int hvm_create_ioreq_server(struct domain *d, bool is_default,
> > >                              int bufioreq_handling, ioservid_t *id)
> > >  {
> > >      struct hvm_ioreq_server *s;
> > > +    unsigned int i;
> > >      int rc;
> > >
> > >      if ( bufioreq_handling > HVM_IOREQSRV_BUFIOREQ_ATOMIC )
> > >          return -EINVAL;
> > >
> > > -    rc = -ENOMEM;
> > >      s = xzalloc(struct hvm_ioreq_server);
> > >      if ( !s )
> > > -        goto fail1;
> > > +        return -ENOMEM;
> > >
> > >      domain_pause(d);
> > >      spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
> > >
> > > -    rc = -EEXIST;
> > > -    if ( is_default && d->arch.hvm_domain.default_ioreq_server != NULL )
> > > -        goto fail2;
> > > -
> > > -    rc = hvm_ioreq_server_init(s, d, is_default, bufioreq_handling,
> > > -                               next_ioservid(d));
> > > -    if ( rc )
> > > -        goto fail3;
> > > -
> > > -    list_add(&s->list_entry,
> > > -             &d->arch.hvm_domain.ioreq_server.list);
> > > -
> > >      if ( is_default )
> > >      {
> > > -        d->arch.hvm_domain.default_ioreq_server = s;
> > > -        hvm_ioreq_server_enable(s, true);
> > > +        i = DEFAULT_IOSERVID;
> > > +
> > > +        rc = -EEXIST;
> > > +        if ( GET_IOREQ_SERVER(d, i) )
> > > +            goto fail;
> > >      }
> > > +    else
> > > +    {
> > > +        for ( i = 0; i < MAX_NR_IOREQ_SERVERS; i++ )
> > > +        {
> > > +            if ( i != DEFAULT_IOSERVID && !GET_IOREQ_SERVER(d, i) )
> > > +                break;
> > > +        }
> > > +
> > > +        rc = -ENOSPC;
> > > +        if ( i >= MAX_NR_IOREQ_SERVERS )
> > > +            goto fail;
> > > +    }
> > > +
> > > +    set_ioreq_server(d, i, s);
> > > +
> > > +    rc = hvm_ioreq_server_init(s, d, bufioreq_handling, i);
> >
> > Is it safe to do the init only after the insertion? I guess all lock-less
> > array traversals happen in the context of the guest (which is paused
> > here), but the old code did things the other way around anyway.
> > So unless something breaks with the inverse order, I'd suggest
> > using that. If the order is required to be the way you have it, I'd
> > again like to suggest adding a comment clarifying that this is intentional.
> 
> Yes, it's safe for the reason you mention but I will switch it round since
> it does indeed look suspicious. I don't think there is any reason for it to
> be ordered this way now... it's possible I needed get_ioreq_server() to work
> correctly during some intermediate development step.
> 

No, it still needs to be this way round. I need IS_DEFAULT() to work correctly 
in some sub-functions. I'll add comments.
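
Roughly along these lines in hvm_create_ioreq_server() (a sketch, exact
comment wording still to be settled):

      /*
       * It is necessary to put the server into the array before calling
       * hvm_ioreq_server_init(), since sub-functions of the init rely on
       * IS_DEFAULT() (and hence the array slot) being valid for the new
       * server. The domain is paused at this point, so nothing can race
       * with the lock-less lookups.
       */
      set_ioreq_server(d, i, s);

      rc = hvm_ioreq_server_init(s, d, bufioreq_handling, i);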

  Paul

> >
> > > @@ -744,41 +754,38 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
> > >      struct hvm_ioreq_server *s;
> > >      int rc;
> > >
> > > -    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
> > > +    if ( id == DEFAULT_IOSERVID )
> > > +        return -EPERM;
> > >
> > > -    rc = -ENOENT;
> > > -    list_for_each_entry ( s,
> > > -                          &d->arch.hvm_domain.ioreq_server.list,
> > > -                          list_entry )
> > > -    {
> > > -        if ( s == d->arch.hvm_domain.default_ioreq_server )
> > > -            continue;
> > > +    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
> > >
> > > -        if ( s->id != id )
> > > -            continue;
> > > +    s = get_ioreq_server(d, id);
> > >
> > > -        rc = -EPERM;
> > > -        if ( s->emulator != current->domain )
> > > -            break;
> > > +    rc = -ENOENT;
> > > +    if ( !s )
> > > +        goto out;
> > >
> > > -        domain_pause(d);
> > > +    ASSERT(!IS_DEFAULT(s));
> > >
> > > -        p2m_set_ioreq_server(d, 0, s);
> > > +    rc = -EPERM;
> > > +    if ( s->emulator != current->domain )
> > > +        goto out;
> > >
> > > -        hvm_ioreq_server_disable(s, false);
> > > +    domain_pause(d);
> > >
> > > -        list_del(&s->list_entry);
> > > +    p2m_set_ioreq_server(d, 0, s);
> > >
> > > -        hvm_ioreq_server_deinit(s, false);
> > > +    hvm_ioreq_server_disable(s);
> > > +    hvm_ioreq_server_deinit(s);
> > >
> > > -        domain_unpause(d);
> > > +    domain_unpause(d);
> > >
> > > -        xfree(s);
> > > +    set_ioreq_server(d, id, NULL);
> >
> > Same here then for the deinit vs list_del() / set_ioreq_server()
> > ordering. Also (but perhaps less relevant) in
> > hvm_destroy_all_ioreq_servers().
> >
> 
> I'll switch it round.
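
I.e. roughly this ordering in hvm_destroy_ioreq_server() (a sketch; I'll
confirm the details when I re-spin):

      hvm_ioreq_server_disable(s);

      set_ioreq_server(d, id, NULL);
      hvm_ioreq_server_deinit(s);

      domain_unpause(d);
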
> 
> > > @@ -1168,16 +1155,11 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
> > >          addr = p->addr;
> > >      }
> > >
> > > -    list_for_each_entry ( s,
> > > -                          &d->arch.hvm_domain.ioreq_server.list,
> > > -                          list_entry )
> > > +    FOR_EACH_IOREQ_SERVER(d, id, s)
> > >      {
> > >          struct rangeset *r;
> > >
> > > -        if ( s == d->arch.hvm_domain.default_ioreq_server )
> > > -            continue;
> > > -
> > > -        if ( !s->enabled )
> > > +        if ( IS_DEFAULT(s) )
> > >              continue;
> >
> > Do you really mean for the "enabled" check to go away here? If so, why?
> >
> 
> No, that check should still be there. I don't know how it got removed.
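
I.e. the loop in hvm_select_ioreq_server() should keep both checks (sketch):

      FOR_EACH_IOREQ_SERVER(d, id, s)
      {
          struct rangeset *r;

          if ( IS_DEFAULT(s) )
              continue;

          if ( !s->enabled )
              continue;
          ...
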
> 
> > > @@ -1369,13 +1351,13 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
> > >  {
> > >      struct domain *d = current->domain;
> > >      struct hvm_ioreq_server *s;
> > > -    unsigned int failed = 0;
> > > +    unsigned int id, failed = 0;
> > >
> > > -    list_for_each_entry ( s,
> > > -                          &d->arch.hvm_domain.ioreq_server.list,
> > > -                          list_entry )
> > > +    FOR_EACH_IOREQ_SERVER(d, id, s)
> > > +    {
> > >          if ( hvm_send_ioreq(s, p, buffered) == X86EMUL_UNHANDLEABLE )
> > >              failed++;
> > > +    }
> >
> > Which in turn makes me wonder - should broadcasts really be sent
> > to disabled servers?
> >
> 
> I can't think of a reason why we'd need them to be. I'll add a check there.
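
Something like this in hvm_broadcast_ioreq() then (sketch):

      FOR_EACH_IOREQ_SERVER(d, id, s)
      {
          if ( !s->enabled )
              continue;

          if ( hvm_send_ioreq(s, p, buffered) == X86EMUL_UNHANDLEABLE )
              failed++;
      }
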
> 
>   Paul
> 
> > Jan
> 