Re: [Xen-devel] [PATCH v8 1/3] ioreq-server: add support for multiple servers
>>> On 19.05.14 at 15:47, <paul.durrant@xxxxxxxxxx> wrote:
> +static int hvm_access_cf8(
> + int dir, uint32_t port, uint32_t bytes, uint32_t *val)
> +{
> + struct domain *d = current->domain;
> +
> + if ( dir == IOREQ_WRITE &&
> + bytes == 4 )
Don't you also need to check port == 0xcf8 here?
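I.e. something along these lines (untested, and the value being latched into
pci_cf8 is merely my guess from its use further down in
hvm_select_ioreq_server()):

    if ( dir == IOREQ_WRITE &&
         port == 0xcf8 &&
         bytes == 4 )
        d->arch.hvm_domain.pci_cf8 = *val;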
> +static ioservid_t next_ioservid(struct domain *d)
> +{
> + struct hvm_ioreq_server *s;
> + ioservid_t id;
> +
> + ASSERT(spin_is_locked(&d->arch.hvm_domain.ioreq_server.lock));
> +
> + id = d->arch.hvm_domain.ioreq_server.id;
> +
> + again:
> + id++;
> +
> + /* Check for uniqueness */
> + list_for_each_entry ( s,
> + &d->arch.hvm_domain.ioreq_server.list,
> + list_entry )
> + {
> + if (id == s->id)
The beloved coding style remark again: blanks immediately inside the
parentheses, i.e. if ( id == s->id ).
> @@ -757,25 +937,34 @@ static int hvm_create_ioreq_server(struct domain *d,
> domid_t domid)
> goto fail1;
>
> domain_pause(d);
> - spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> + spin_lock(&d->arch.hvm_domain.ioreq_server.lock);
>
> rc = -EEXIST;
> - if ( d->arch.hvm_domain.ioreq_server != NULL )
> + if ( is_default && d->arch.hvm_domain.default_ioreq_server != NULL )
> goto fail2;
>
> - rc = hvm_ioreq_server_init(s, d, domid);
> + rc = hvm_ioreq_server_init(s, d, domid, is_default,
> + next_ioservid(d));
> if ( rc )
> - goto fail2;
> + goto fail3;
> +
> + list_add(&s->list_entry,
> + &d->arch.hvm_domain.ioreq_server.list);
>
> - d->arch.hvm_domain.ioreq_server = s;
> + if ( is_default )
> + d->arch.hvm_domain.default_ioreq_server = s;
>
> - spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> + if (id != NULL)
And again: if ( id != NULL ).
> +static struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
> + ioreq_t *p)
> +{
> +#define CF8_BDF(cf8) (((cf8) & 0x00ffff00) >> 8)
> +#define CF8_ADDR_LO(cf8) ((cf8) & 0x000000fc)
> +#define CF8_ADDR_HI(cf8) (((cf8) & 0x0f000000) >> 16)
> +#define CF8_ENABLED(cf8) (!!((cf8) & 0x80000000))
> +
> + struct hvm_ioreq_server *s;
> + uint32_t cf8;
> + uint8_t type;
> + uint64_t addr;
> +
> + if ( list_empty(&d->arch.hvm_domain.ioreq_server.list) )
> + return NULL;
> +
> + if ( list_is_singular(&d->arch.hvm_domain.ioreq_server.list) ||
> + (p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO) )
> + return d->arch.hvm_domain.default_ioreq_server;
> +
> + cf8 = d->arch.hvm_domain.pci_cf8;
> +
> + /*
> + * NOTE: Only types IOREQ_TYPE_PIO and IOREQ_TYPE_COPY are valid
> + * on entry.
> + */
I realize this got added in response to my comments on the previous
version, but I've only just noticed that you had already added a
corresponding check in the second if() above back then (which I missed
at the time). With that the comment seems rather useless.
> +
Line with only blanks.
> struct hvm_domain {
> - spinlock_t ioreq_server_lock;
> - struct hvm_ioreq_server *ioreq_server;
> + /* Guest page range used for non-default ioreq servers */
> + struct {
> + unsigned long base;
> + unsigned long mask;
> + unsigned int count;
Do you really need count and mask here? I suppose that partly
depends on whether HVM_PARAM_IOREQ_SERVER_PFN can be
changed more than once for a domain. If it can't/shouldn't, I
suppose just mask would suffice (by simply setting all unavailable
bits). If it can, the question arises what the right action is upon
a request to lower the count below the number of currently
registered servers.
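To illustrate the mask-only variant (names made up, and assuming the size of
the range can never exceed BITS_PER_LONG): at the time
HVM_PARAM_IOREQ_SERVER_PFN gets set you'd do something like

    range->base = pfn;
    range->mask = ~0UL << nr_pages;    /* set bit == page unavailable */

and handing a page to a new server would then just be picking and setting the
lowest clear bit, e.g.

    i = find_first_set_bit(~range->mask);
    range->mask |= 1UL << i;
    gmfn = range->base + i;

with no separate count needed.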
Jan