Re: [Xen-devel] [PATCH 5/6] xen-gntalloc: Userspace grant allocation driver
On 02/08/2011 05:48 PM, Konrad Rzeszutek Wilk wrote:
> On Thu, Feb 03, 2011 at 12:19:03PM -0500, Daniel De Graaf wrote:
>> This allows a userspace application to allocate a shared page for
>> implementing inter-domain communication or device drivers. These
>> shared pages can be mapped using the gntdev device or by the kernel
>> in another domain.
>>
>> Signed-off-by: Daniel De Graaf <dgdegra@xxxxxxxxxxxxx>
>> ---
>>  drivers/xen/Kconfig    |    8 +
>>  drivers/xen/Makefile   |    2 +
>>  drivers/xen/gntalloc.c |  486 ++++++++++++++++++++++++++++++++++++++++++++++++
>>  include/xen/gntalloc.h |   50 +++++
>>  4 files changed, 546 insertions(+), 0 deletions(-)
>>  create mode 100644 drivers/xen/gntalloc.c
>>  create mode 100644 include/xen/gntalloc.h
>>
[snip]
>> +static int add_grefs(struct ioctl_gntalloc_alloc_gref *op,
>> +        uint32_t *gref_ids, struct gntalloc_file_private_data *priv)
>> +{
>> +        int i, rc, readonly;
>> +        LIST_HEAD(queue_gref);
>> +        LIST_HEAD(queue_file);
>> +        struct gntalloc_gref *gref;
>> +
>> +        readonly = !(op->flags & GNTALLOC_FLAG_WRITABLE);
>> +        rc = -ENOMEM;
>> +        for (i = 0; i < op->count; i++) {
>> +                gref = kzalloc(sizeof(*gref), GFP_KERNEL);
>> +                if (!gref)
>> +                        goto undo;
>> +                list_add_tail(&gref->next_gref, &queue_gref);
>> +                list_add_tail(&gref->next_file, &queue_file);
>> +                gref->users = 1;
>> +                gref->file_index = op->index + i * PAGE_SIZE;
>> +                gref->page = alloc_page(GFP_KERNEL|__GFP_ZERO);
>> +                if (!gref->page)
>> +                        goto undo;
>> +
>> +                /* Grant foreign access to the page. */
>> +                gref->gref_id = gnttab_grant_foreign_access(op->domid,
>> +                        pfn_to_mfn(page_to_pfn(gref->page)), readonly);
>> +                if (gref->gref_id < 0) {
>> +                        rc = gref->gref_id;
>> +                        goto undo;
>> +                }
>> +                gref_ids[i] = gref->gref_id;
>> +        }
>> +
>> +        /* Add to gref lists. */
>> +        spin_lock(&gref_lock);
>> +        list_splice_tail(&queue_gref, &gref_list);
>> +        list_splice_tail(&queue_file, &priv->list);
>> +        spin_unlock(&gref_lock);
>> +
>> +        return 0;
>> +
>> +undo:
>> +        spin_lock(&gref_lock);
>> +        gref_size -= (op->count - i);
>
> So we decrease the gref_size by the count of the ones that we
> allocated..

No, (op->count - i) is the number that we haven't yet allocated. Those
aren't in queue_file, so __del_gref is never called on them.

>> +
>> +        list_for_each_entry(gref, &queue_file, next_file) {
>> +                /* __del_gref does not remove from queue_file */
>> +                __del_gref(gref);
>
> .. but __del_gref decreases the gref_size by one, so wouldn't
> we decrease by too much?
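[Editorial note: to make the undo bookkeeping easier to follow, here is a
small standalone model of the accounting being discussed. It is not the
driver code: only gref_size and the count/iteration variables mirror the
patch, and the concrete numbers are invented for illustration.]

#include <stdio.h>

/* Counter mirrored from the patch; everything else here is made up. */
static int gref_size;

int main(void)
{
        int op_count = 8;   /* the ioctl asked for 8 pages           */
        int i = 3;          /* add_grefs() failed after queuing 3    */

        /* gntalloc_ioctl_alloc() reserves the whole request up front. */
        gref_size += op_count;

        /* Undo path, first part: the (op_count - i) entries that were
         * never created are not on queue_file, so they are given back
         * in a single subtraction. */
        gref_size -= (op_count - i);

        /* Undo path, second part: each of the i entries that *are* on
         * queue_file goes through __del_gref(), which drops gref_size
         * by one per entry. */
        for (int k = 0; k < i; k++)
                gref_size--;    /* stands in for __del_gref(gref) */

        /* The two parts cover disjoint sets, so the total decrement is
         * (op_count - i) + i == op_count and the counter returns to 0. */
        printf("gref_size after undo: %d\n", gref_size);
        return 0;
}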
[snip]
>> +static long gntalloc_ioctl_alloc(struct gntalloc_file_private_data *priv,
>> +                struct ioctl_gntalloc_alloc_gref __user *arg)
>> +{
>> +        int rc = 0;
>> +        struct ioctl_gntalloc_alloc_gref op;
>> +        uint32_t *gref_ids;
>> +
>> +        pr_debug("%s: priv %p\n", __func__, priv);
>> +
>> +        if (copy_from_user(&op, arg, sizeof(op))) {
>> +                rc = -EFAULT;
>> +                goto out;
>> +        }
>> +
>> +        gref_ids = kzalloc(sizeof(gref_ids[0]) * op.count, GFP_TEMPORARY);
>> +        if (!gref_ids) {
>> +                rc = -ENOMEM;
>> +                goto out;
>> +        }
>> +
>> +        spin_lock(&gref_lock);
>> +        /* Clean up pages that were at zero (local) users but were still mapped
>> +         * by remote domains. Since those pages count towards the limit that we
>> +         * are about to enforce, removing them here is a good idea.
>> +         */
>> +        do_cleanup();
>> +        if (gref_size + op.count > limit) {
>> +                spin_unlock(&gref_lock);
>> +                rc = -ENOSPC;
>> +                goto out_free;
>> +        }
>> +        gref_size += op.count;
>> +        op.index = priv->index;
>> +        priv->index += op.count * PAGE_SIZE;
>> +        spin_unlock(&gref_lock);
>> +
>> +        rc = add_grefs(&op, gref_ids, priv);
>> +        if (rc < 0)
>> +                goto out_free;
>
> Should we clean up priv->index to its earlier value?

We could, but that could also introduce a possible race if two threads
were running the ioctl at the same time. There's no harm in letting the
index increase, since it is 64-bit and so has no overflow issues.
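[Editorial note: for context on why op.index only ever grows, it is just an
offset into the gntalloc device that userspace later hands to mmap(). A rough
usage sketch follows, assuming the ioctl interface declared in this patch's
include/xen/gntalloc.h and the /dev/xen/gntalloc node that the misc-device
registration would normally produce; the domid value is a placeholder.]

#include <stdio.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

#include <xen/gntalloc.h>       /* header added by this patch */

int main(void)
{
        int fd = open("/dev/xen/gntalloc", O_RDWR);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        struct ioctl_gntalloc_alloc_gref op = {
                .domid = 1,     /* placeholder: domain to share with */
                .flags = GNTALLOC_FLAG_WRITABLE,
                .count = 1,     /* one shared page */
        };
        if (ioctl(fd, IOCTL_GNTALLOC_ALLOC_GREF, &op)) {
                perror("ioctl");
                return 1;
        }

        /* op.index is the offset discussed above; it is only meaningful
         * as an argument to mmap() on the same file descriptor. */
        void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                          MAP_SHARED, fd, op.index);
        if (page == MAP_FAILED) {
                perror("mmap");
                return 1;
        }

        printf("grant ref %u shared, mapped at %p\n", op.gref_ids[0], page);
        return 0;
}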
--
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel