
Re: [Xen-devel] [PATCH v4 3/8] ioreq-server: create basic ioreq server abstraction.



>>> On 02.04.14 at 17:11, <paul.durrant@xxxxxxxxxx> wrote:
> +static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
> +                                     struct vcpu *v)
> +{
> +    struct hvm_ioreq_vcpu *sv;
> +    int rc;
> +
> +    spin_lock(&s->lock);
> +
> +    sv = xzalloc(struct hvm_ioreq_vcpu);
> +
> +    rc = -ENOMEM;
> +    if ( !sv )
> +        goto fail1;

I don't see why this allocation needs to be done with the lock already
held. For the other (event channel) allocations further down I would
also prefer if you allocated them without holding the lock yet, even
if that means freeing the per-domain one if you find it already set.
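
Just to illustrate the first point (a rough sketch only, using the
names visible in the patch context):

    struct hvm_ioreq_vcpu *sv;

    /* The allocation needs no serialization - do it before locking. */
    sv = xzalloc(struct hvm_ioreq_vcpu);
    if ( !sv )
        return -ENOMEM;

    spin_lock(&s->lock);
    ...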

> +
> +    rc = alloc_unbound_xen_event_channel(v, s->domid, NULL);
> +    if ( rc < 0 )
> +        goto fail2;
> +
> +    sv->ioreq_evtchn = rc;
> +
> +    if ( v->vcpu_id == 0 )
> +    {

I generally dislike needless dependencies on vCPU 0 being the first
one to make it into any specific function. Can't you check emptiness
of s->ioreq_vcpu_list instead?
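
I.e. something like this (assuming the list only ever contains vCPUs
already added to this server):

    if ( list_empty(&s->ioreq_vcpu_list) )
    {
        /* First vCPU being added - do the one-time (buffered channel) setup. */
        ...
    }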

> +static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s,
> +                                         struct vcpu *v)
> +{
> +    struct list_head *entry;
> +
> +    spin_lock(&s->lock);
> +
> +    list_for_each ( entry, &s->ioreq_vcpu_list )
> +    {
> +        struct hvm_ioreq_vcpu *sv = container_of(entry, 
> +                                                 struct hvm_ioreq_vcpu, 
> +                                                 list_entry);
> +
> +        if ( sv->vcpu != v )
> +            continue;
> +
> +        list_del_init(&sv->list_entry);
> +
> +        if ( v->vcpu_id == 0 )
> +            free_xen_event_channel(v, s->bufioreq_evtchn);
> +
> +        free_xen_event_channel(v, sv->ioreq_evtchn);
> +
> +        xfree(sv);
> +        break;

Similar comments as above: Try to avoid depending on vCPU 0 being
the last one to be cleaned up (I'm not even certain this is the case),
and try freeing stuff with the lock already dropped.
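
Roughly what I have in mind (a sketch only, leaving aside the buffered
channel / vCPU 0 question raised above):

    struct hvm_ioreq_vcpu *found = NULL;

    spin_lock(&s->lock);

    list_for_each ( entry, &s->ioreq_vcpu_list )
    {
        struct hvm_ioreq_vcpu *sv = container_of(entry,
                                                 struct hvm_ioreq_vcpu,
                                                 list_entry);

        if ( sv->vcpu == v )
        {
            /* Only unlink under the lock ... */
            list_del_init(&sv->list_entry);
            found = sv;
            break;
        }
    }

    spin_unlock(&s->lock);

    /* ... and do the actual freeing with the lock dropped. */
    if ( found )
    {
        free_xen_event_channel(v, found->ioreq_evtchn);
        xfree(found);
    }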

> +static int hvm_set_ioreq_pfn(struct domain *d, bool_t buf,
> +                             unsigned long pfn)
> +{
> +    struct hvm_ioreq_server *s = d->arch.hvm_domain.ioreq_server;
> +    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
> +    int rc;
> +
> +    spin_lock(&s->lock);
> +
> +    rc = hvm_map_ioreq_page(d, iorp, pfn);

While I realize that at this point there's only one server per domain,
the locking and the operation still look out of sync at first glance:
the lock being taken is the server's, while the function being called
operates on the domain. As this can't remain that way till the end of
the series anyway, can't it be brought back in sync right away (whether
that's by passing s instead of d into the function, or by acquiring the
lock only after the call, I don't know offhand)?
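
Just to illustrate the second variant (whether hvm_map_ioreq_page() is
actually fine to call without the lock held I haven't checked):

    rc = hvm_map_ioreq_page(d, iorp, pfn);
    if ( rc )
        return rc;

    spin_lock(&s->lock);
    ...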

> +    if ( rc )
> +        goto fail;
> +
> +    if (!buf) {

Coding style: "if ( !buf )" (blanks immediately inside the parentheses),
and the opening brace goes on its own line.

> +static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
> +                                     evtchn_port_t *p_port)
> +{
> +    evtchn_port_t old_port, new_port;
> +
> +    new_port = alloc_unbound_xen_event_channel(v, remote_domid, NULL);
> +    if ( new_port < 0 )

evtchn_port_t is an unsigned type, so this check won't work.
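
I.e. the return value wants latching into a plain (signed) int first,
along the lines of:

    int rc = alloc_unbound_xen_event_channel(v, remote_domid, NULL);

    if ( rc < 0 )
        return rc;

    new_port = rc;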

> +static int hvm_set_dm_domain(struct domain *d, domid_t domid)
> +{
> +    struct hvm_ioreq_server *s = d->arch.hvm_domain.ioreq_server;
> +    int rc = 0;
> +
> +    spin_lock(&s->lock);
> +    domain_pause(d);

The other way around perhaps? domain_pause() may have to wait for the
vCPUs to get de-scheduled, which isn't something to be doing with a
spin lock held.
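
I.e. (just to spell out the ordering I'd expect, including the reverse
order on the exit path):

    domain_pause(d);
    spin_lock(&s->lock);
    ...
    spin_unlock(&s->lock);
    domain_unpause(d);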

> +
> +    if ( s->domid != domid ) {

Coding style again.

> +        struct list_head *entry;
> +
> +        list_for_each ( entry, &s->ioreq_vcpu_list )
> +        {
> +            struct hvm_ioreq_vcpu *sv = container_of(entry,
> +                                                     struct hvm_ioreq_vcpu,
> +                                                     list_entry);
> +            struct vcpu *v = sv->vcpu;
> +
> +            if ( v->vcpu_id == 0 ) {

And again; won't make further remarks to this effect.

>  int hvm_domain_initialise(struct domain *d)
>  {
> +    domid_t domid;

Do you really need this new variable, being used just once? (There
is at least one more similar case elsewhere.)

> @@ -1339,30 +1549,10 @@ int hvm_vcpu_initialise(struct vcpu *v)
>           && (rc = nestedhvm_vcpu_initialise(v)) < 0 ) /* teardown: nestedhvm_vcpu_destroy */
>          goto fail5;
>  
> -    dm_domid = d->arch.hvm_domain.params[HVM_PARAM_DM_DOMAIN];
> -
> -    /* Create ioreq event channel. */
> -    rc = alloc_unbound_xen_event_channel(v, dm_domid, NULL); /* teardown: none */
> -    if ( rc < 0 )
> +    rc = hvm_ioreq_server_add_vcpu(s, v);
> +    if ( rc != 0 )

Can this really be > 0 now, and if so is this being handled correctly in
the caller(s)?

Jan
