
Re: [Xen-devel] [PATCH v5 4/9] ioreq-server: create basic ioreq server abstraction.



>>> On 01.05.14 at 14:08, <paul.durrant@xxxxxxxxxx> wrote:
> Collect together the data structures concerning device emulation into a
> new struct hvm_ioreq_server.
> 
> Code that deals with the shared and buffered ioreq pages is extracted from
> functions such as hvm_domain_initialise, hvm_vcpu_initialise and do_hvm_op
> and consolidated into a set of hvm_ioreq_server manipulation functions. The
> lock in the hvm_ioreq_page served two different purposes and has been
> replaced by separate locks in the hvm_ioreq_server structure.
> 
> Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>

> Cc: Jan Beulich <jbeulich@xxxxxxxx>

>  bool_t hvm_io_pending(struct vcpu *v)
>  {
> -    ioreq_t *p = get_ioreq(v);
> +    struct hvm_ioreq_server *s = v->domain->arch.hvm_domain.ioreq_server;
> +    ioreq_t *p;
>  
> -    if ( !p )
> +    if ( !s )
>          return 0;
>  
> +    p = get_ioreq(s, v);
>      return p->state != STATE_IOREQ_NONE;

I don't think you need the variable "p" here anymore.
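
I.e. (untested, just to illustrate) the whole body could shrink to:

    struct hvm_ioreq_server *s = v->domain->arch.hvm_domain.ioreq_server;

    if ( !s )
        return 0;

    return get_ioreq(s, v)->state != STATE_IOREQ_NONE;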

> @@ -426,14 +431,6 @@ void hvm_do_resume(struct vcpu *v)
>      }
>  }
>  
> -static void hvm_init_ioreq_page(
> -    struct domain *d, struct hvm_ioreq_page *iorp)
> -{
> -    memset(iorp, 0, sizeof(*iorp));
> -    spin_lock_init(&iorp->lock);
> -    domain_pause(d);

So where is this ...

> @@ -513,22 +507,15 @@ static int hvm_map_ioreq_page(
>      if ( (rc = prepare_ring_for_helper(d, gmfn, &page, &va)) )
>          return rc;
>  
> -    spin_lock(&iorp->lock);
> -
>      if ( (iorp->va != NULL) || d->is_dying )
>      {
>          destroy_ring_for_helper(&va, page);
> -        spin_unlock(&iorp->lock);
>          return -EINVAL;
>      }
>  
>      iorp->va = va;
>      iorp->page = page;
>  
> -    spin_unlock(&iorp->lock);
> -
> -    domain_unpause(d);

... and this going? Or is the pausing no longer needed for a non-
obvious reason?

> +static int hvm_set_ioreq_pfn(struct domain *d, bool_t buf,
> +                             unsigned long pfn)
> +{
> +    struct hvm_ioreq_server *s = d->arch.hvm_domain.ioreq_server;
> +    int rc;
> +
> +    spin_lock(&s->lock);
> +
> +    rc = hvm_map_ioreq_page(s, buf, pfn);
> +    if ( rc )
> +        goto fail;
> +
> +    if (!buf) {

Coding style. I'm afraid this isn't the first time I've had to make such
a remark on this series; please check the style before submitting.
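
For reference, matching e.g. the "if ( rc )" a few lines up, this wants
to be:

    if ( !buf )
    {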

> +        struct hvm_ioreq_vcpu *sv;
> +
> +        list_for_each_entry ( sv,
> +                              &s->ioreq_vcpu_list,
> +                              list_entry )
> +            hvm_update_ioreq_evtchn(s, sv);
> +    }
> +
> +    spin_unlock(&s->lock);
> +    return 0;
> +
> + fail:
> +    spin_unlock(&s->lock);
> +    return rc;
> +}

If the function isn't going to change significantly in subsequent
patches, there's no real point here in having the "fail" label and
separate error exit path.
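
I.e. (untested sketch) the tail of the function could simply be:

    rc = hvm_map_ioreq_page(s, buf, pfn);
    if ( !rc && !buf )
    {
        struct hvm_ioreq_vcpu *sv;

        list_for_each_entry ( sv,
                              &s->ioreq_vcpu_list,
                              list_entry )
            hvm_update_ioreq_evtchn(s, sv);
    }

    spin_unlock(&s->lock);

    return rc;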

> +static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
> +                                     evtchn_port_t *p_port)
> +{
> +    evtchn_port_t old_port, new_port;
> +
> +    new_port = alloc_unbound_xen_event_channel(v, remote_domid, NULL);
> +    if ( new_port < 0 )
> +        return new_port;

I'm pretty sure I commented on this too in a previous version:
evtchn_port_t is an unsigned type, so checking whether it is negative
is pointless.
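
Presumably - assuming alloc_unbound_xen_event_channel() returns a plain
int here, with negative values being error codes - this wants an int
intermediary, along the lines of:

    int rc = alloc_unbound_xen_event_channel(v, remote_domid, NULL);

    if ( rc < 0 )
        return rc;

    new_port = rc;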

Jan

