Re: [Xen-devel] [PATCH v2 04/17] x86/hvm: unify dpci portio intercept with standard portio intercept
> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 17 June 2015 15:37
> To: Paul Durrant
> Cc: Andrew Cooper; xen-devel@xxxxxxxxxxxxxxxxxxxx; Keir (Xen.org)
> Subject: Re: [PATCH v2 04/17] x86/hvm: unify dpci portio intercept with
> standard portio intercept
>
> >>> On 11.06.15 at 17:42, <paul.durrant@xxxxxxxxxx> wrote:
> > --- a/xen/arch/x86/hvm/io.c
> > +++ b/xen/arch/x86/hvm/io.c
> > @@ -208,185 +208,113 @@ void hvm_io_assist(ioreq_t *p)
> >      }
> >  }
> >
> > -static int dpci_ioport_read(uint32_t mport, ioreq_t *p)
> > +static bool_t dpci_portio_accept(struct hvm_io_handler *io_handler,
> > +                                 struct vcpu *v,
> > +                                 uint64_t addr,
> > +                                 uint32_t size)
> >  {
> > -    struct hvm_vcpu_io *vio = &current->arch.hvm_vcpu.hvm_io;
> > -    int rc = X86EMUL_OKAY, i, step = p->df ? -p->size : p->size;
> > -    uint32_t data = 0;
> > -
> > -    for ( i = 0; i < p->count; i++ )
> > -    {
> > -        if ( vio->mmio_retrying )
> > -        {
> > -            if ( vio->mmio_large_read_bytes != p->size )
> > -                return X86EMUL_UNHANDLEABLE;
> > -            memcpy(&data, vio->mmio_large_read, p->size);
> > -            vio->mmio_large_read_bytes = 0;
> > -            vio->mmio_retrying = 0;
> > -        }
> > -        else switch ( p->size )
> > -        {
> > -        case 1:
> > -            data = inb(mport);
> > -            break;
> > -        case 2:
> > -            data = inw(mport);
> > -            break;
> > -        case 4:
> > -            data = inl(mport);
> > -            break;
> > -        default:
> > -            BUG();
> > -        }
> > +    struct hvm_iommu *hd = domain_hvm_iommu(v->domain);
> > +    struct hvm_vcpu_io *vio = &v->arch.hvm_vcpu.hvm_io;
> > +    struct g2m_ioport *g2m_ioport;
> > +    uint32_t start, end;
> > +    uint32_t gport = addr, mport;
> >
> > -        if ( p->data_is_ptr )
> > -        {
> > -            switch ( hvm_copy_to_guest_phys(p->data + step * i,
> > -                                            &data, p->size) )
> > -            {
> > -            case HVMCOPY_okay:
> > -                break;
> > -            case HVMCOPY_gfn_paged_out:
> > -            case HVMCOPY_gfn_shared:
> > -                rc = X86EMUL_RETRY;
> > -                break;
> > -            case HVMCOPY_bad_gfn_to_mfn:
> > -                /* Drop the write as real hardware would. */
> > -                continue;
> > -            case HVMCOPY_bad_gva_to_gfn:
> > -                ASSERT(0);
> > -                /* fall through */
> > -            default:
> > -                rc = X86EMUL_UNHANDLEABLE;
> > -                break;
> > -            }
> > -            if ( rc != X86EMUL_OKAY)
> > -                break;
> > -        }
> > -        else
> > -            p->data = data;
> > -    }
> >
> > -    if ( rc == X86EMUL_RETRY )
> > +    list_for_each_entry( g2m_ioport, &hd->arch.g2m_ioport_list, list )
> >      {
> > -        vio->mmio_retry = 1;
> > -        vio->mmio_large_read_bytes = p->size;
> > -        memcpy(vio->mmio_large_read, &data, p->size);
> > +        start = g2m_ioport->gport;
> > +        end = start + g2m_ioport->np;
> > +        if ( (gport >= start) && (gport < end) )
> > +            goto found;
> >      }
> >
> > -    if ( i != 0 )
> > +    return 0;
> > +
> > + found:
> > +    mport = (gport - start) + g2m_ioport->mport;
> > +
> > +    if ( !ioports_access_permitted(current->domain, mport,
>
> Isn't v == current here (which of course should be made obvious
> again by naming it curr)? And then - is this check needed here at
> all? I realize you just move it, but the permission check should be
> (and is being) done in the XEN_DOMCTL_ioport_mapping handler.
>

Ok. I'll happily drop the check.
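For reference, a minimal sketch of how the accept callback might look with the above addressed: the vcpu parameter named curr, and the ioports_access_permitted() check dropped since XEN_DOMCTL_ioport_mapping already enforces it. This is only a sketch based on the callback signature and g2m_ioport fields quoted above, not the committed code.

static bool_t dpci_portio_accept(struct hvm_io_handler *io_handler,
                                 struct vcpu *curr,
                                 uint64_t addr,
                                 uint32_t size)
{
    struct hvm_iommu *hd = domain_hvm_iommu(curr->domain);
    struct g2m_ioport *g2m_ioport;
    uint32_t start, end;
    uint32_t gport = addr;

    /* Accept only guest ports covered by a g2m ioport mapping. */
    list_for_each_entry( g2m_ioport, &hd->arch.g2m_ioport_list, list )
    {
        start = g2m_ioport->gport;
        end = start + g2m_ioport->np;
        if ( (gport >= start) && (gport < end) )
        {
            /*
             * Claim the access.  The matching entry (and hence the machine
             * port, (gport - start) + g2m_ioport->mport) would still need
             * to be recorded for the read/write callbacks; how that is
             * stashed is left out of this sketch.
             */
            return 1;
        }
    }

    return 0;
}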
> > +void register_dpci_portio_handler(struct domain *d)
> > +{
> > +    struct hvm_io_handler *handler = &d->arch.hvm_domain.dpci_handler;
> >
> > -    switch ( p->dir )
> > -    {
> > -    case IOREQ_READ:
> > -        rc = dpci_ioport_read(mport, p);
> > -        break;
> > -    case IOREQ_WRITE:
> > -        rc = dpci_ioport_write(mport, p);
> > -        break;
> > -    default:
> > -        gdprintk(XENLOG_ERR, "Error: couldn't handle p->dir = %d", p->dir);
> > -        rc = X86EMUL_UNHANDLEABLE;
> > -    }
> > +    handler->type = IOREQ_TYPE_PIO;
> > +    handler->ops = &dpci_portio_ops;
> >
> > -    return rc;
> > +    hvm_register_io_handler(d, handler, 1);
>
> Again registering a pointer into struct domain, with invariant
> initialization.
>

Agreed, but I don't see any other way of unifying this with the main
process loop. Do you have an alternative suggestion?

  Paul

> Jan
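One hypothetical alternative to keeping an invariantly-initialised handler in struct domain would be a single const handler instance that the unified dispatch loop consults for every domain, relying on dpci_portio_accept() to reject domains with no g2m ioport mappings. The sketch below assumes the hvm_io_handler and dpci_portio_ops definitions quoted above; hvm_static_io_handlers is an illustrative name, not an existing Xen interface.

/* Sketch only: a shared, const dpci portio handler instead of a
 * per-domain copy whose initialisation never varies. */
static const struct hvm_io_handler dpci_portio_handler = {
    .type = IOREQ_TYPE_PIO,
    .ops  = &dpci_portio_ops,
};

/* Illustrative table of invariant handlers; the dispatch loop would scan
 * this after any per-domain handlers, so no per-domain registration call
 * (and no dpci_handler field in struct domain) is needed. */
static const struct hvm_io_handler *const hvm_static_io_handlers[] = {
    &dpci_portio_handler,
};

The trade-off is that dpci_portio_accept() then runs for every guest port access, even in domains that have no mapped ioports.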