
Re: [Xen-devel] [PATCH v2 1/6] ioreq-server: centralize access to ioreq structures


  • To: Jan Beulich <JBeulich@xxxxxxxx>
  • From: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
  • Date: Tue, 4 Mar 2014 17:25:39 +0000
  • Accept-language: en-GB, en-US
  • Cc: "xen-devel@xxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxx>
  • Delivery-date: Tue, 04 Mar 2014 17:25:46 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>
  • Thread-index: AQHPN56V2tZIy+t5/UubvaVbcYP2uZrQyH2AgABk73A=
  • Thread-topic: [Xen-devel] [PATCH v2 1/6] ioreq-server: centralize access to ioreq structures

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 04 March 2014 12:21
> To: Paul Durrant
> Cc: xen-devel@xxxxxxxxxxxxx
> Subject: Re: [Xen-devel] [PATCH v2 1/6] ioreq-server: centralize access to
> ioreq structures
> 
> >>> On 04.03.14 at 12:40, Paul Durrant <paul.durrant@xxxxxxxxxx> wrote:
> > @@ -232,20 +211,15 @@ static int hvmemul_do_io(
> >              vio->io_state = HVMIO_handle_mmio_awaiting_completion;
> >          break;
> >      case X86EMUL_UNHANDLEABLE:
> > -        /* If there is no backing DM, just ignore accesses */
> > -        if ( !has_dm )
> > +        rc = X86EMUL_RETRY;
> > +        if ( !hvm_send_assist_req(curr, p) )
> >          {
> >              rc = X86EMUL_OKAY;
> >              vio->io_state = HVMIO_none;
> >          }
> > -        else
> > -        {
> > -            rc = X86EMUL_RETRY;
> > -            if ( !hvm_send_assist_req(curr) )
> > -                vio->io_state = HVMIO_none;
> > -            else if ( p_data == NULL )
> > -                rc = X86EMUL_OKAY;
> > -        }
> > +        else if ( p_data == NULL )
> > +            rc = X86EMUL_OKAY;
> > +
> >          break;
> 
> Is the rc value change here really intentional? Previously, when
> !hvm_send_assist_req(), rc would end up being X86EMUL_RETRY,
> while now it gets set to X86EMUL_OKAY. After all, there were
> three different paths originally (setting rc to X86EMUL_OKAY
> and/or setting vio->io_state to HVMIO_none), and you can't
> express this with the bool_t return type of hvm_send_assist_req().
> 

It was intentional, but I'm no longer sure it's the right thing to do. I'll 
re-work things to stick more closely to the original code by introducing a 
new hvm_has_dm() function to replace the stack boolean.

> > +bool_t hvm_io_pending(struct vcpu *v)
> > +{
> > +    ioreq_t *p;
> > +
> > +    if ( !(p = get_ioreq(v)) )
> 
> I'd prefer if you used the call to get_ioreq() as initializer instead of
> in a parenthesized assignment inside a conditional. But yes, it's a
> matter of taste...
> 

Ok. Fair enough.

> > @@ -1407,7 +1425,86 @@ void hvm_vcpu_down(struct vcpu *v)
> >      }
> >  }
> >
> > -bool_t hvm_send_assist_req(struct vcpu *v)
> > +int hvm_buffered_io_send(ioreq_t *p)
> 
> const ioreq_t *?
> 

Yes.

> > +{
> > +    struct vcpu *v = current;
> 
> "curr" please.
> 

Ok.

> > +    struct hvm_ioreq_page *iorp = &v->domain->arch.hvm_domain.buf_ioreq;
> > +    buffered_iopage_t *pg = iorp->va;
> > +    buf_ioreq_t bp;
> > +    /* Timeoffset sends 64b data, but no address. Use two consecutive slots. */
> > +    int qw = 0;
> > +
> > +    /* Ensure buffered_iopage fits in a page */
> > +    BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE);
> > +
> > +    /*
> > +     * Return 0 for the cases we can't deal with:
> > +     *  - 'addr' is only a 20-bit field, so we cannot address beyond 1MB
> > +     *  - we cannot buffer accesses to guest memory buffers, as the guest
> > +     *    may expect the memory buffer to be synchronously accessed
> > +     *  - the count field is usually used with data_is_ptr and since we don't
> > +     *    support data_is_ptr we do not waste space for the count field either
> > +     */
> > +    if ( (p->addr > 0xffffful) || p->data_is_ptr || (p->count != 1) )
> > +        return 0;
> > +
> > +    bp.type = p->type;
> > +    bp.dir  = p->dir;
> > +    switch ( p->size )
> > +    {
> > +    case 1:
> > +        bp.size = 0;
> > +        break;
> > +    case 2:
> > +        bp.size = 1;
> > +        break;
> > +    case 4:
> > +        bp.size = 2;
> > +        break;
> > +    case 8:
> > +        bp.size = 3;
> > +        qw = 1;
> > +        break;
> > +    default:
> > +        gdprintk(XENLOG_WARNING, "unexpected ioreq size: %u\n", p->size);
> > +        return 0;
> > +    }
> > +
> > +    bp.data = p->data;
> > +    bp.addr = p->addr;
> > +
> > +    spin_lock(&iorp->lock);
> > +
> > +    if ( (pg->write_pointer - pg->read_pointer) >=
> > +         (IOREQ_BUFFER_SLOT_NUM - qw) )
> > +    {
> > +        /* The queue is full: send the iopacket through the normal path. */
> > +        spin_unlock(&iorp->lock);
> > +        return 0;
> > +    }
> > +
> > +    memcpy(&pg->buf_ioreq[pg->write_pointer % IOREQ_BUFFER_SLOT_NUM],
> > +           &bp, sizeof(bp));
> 
> Better be type safe using an assignment here?
> 

I didn't write this code, but I'll fix it up since I'm moving it.

> > +
> 
> Line of only spaces.
> 

Again, I'll fix this.

> > +    if ( qw )
> > +    {
> > +        bp.data = p->data >> 32;
> > +        memcpy(&pg->buf_ioreq[(pg->write_pointer+1) % IOREQ_BUFFER_SLOT_NUM],
> > +               &bp, sizeof(bp));
> > +    }
> > +
> > +    /* Make the ioreq_t visible /before/ write_pointer. */
> > +    wmb();
> > +    pg->write_pointer += qw ? 2 : 1;
> > +
> > +    notify_via_xen_event_channel(v->domain,
> > +            v->domain->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_EVTCHN]);
> > +    spin_unlock(&iorp->lock);
> 
> Perhaps worth caching v->domain into a local variable?
> 

Ok.

> > +bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *proto_p)
> 
> const ioreq_t *
> 

Yep.

Thanks,

  Paul

> Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

