
Re: [Xen-devel] [PATCH XEN v5 07/23] tools: Refactor /dev/xen/gnt{dev, shr} wrappers into libxengnttab.



On 16/11/15 07:30, Ian Campbell wrote:
On Fri, 2015-11-13 at 15:38 -0500, Daniel De Graaf wrote:
On 13/11/15 10:02, Ian Campbell wrote:
On Wed, 2015-11-11 at 15:03 +0000, Ian Jackson wrote:
Ian Campbell writes ("[PATCH XEN v5 07/23] tools: Refactor
/dev/xen/gnt{dev,shr} wrappers into libxengnttab."):
libxengnttab will provide a stable API and ABI for accessing the
grant table devices.
[...]
+/**
+ * Memory maps a grant reference from one domain to a local address range.
+ * Mappings should be unmapped with xengnttab_munmap. If notify_offset or
+ * notify_port are not -1, this version will attempt to set up an unmap
+ * notification at the given offset and event channel. When the page is
+ * unmapped, the byte at the given offset will be zeroed and a wakeup will be
+ * sent to the given event channel.  Logs errors.

What happens if the unmap notification cannot be set up ?

Also "when the page is unmapped" makes it sound like you mean
xengnttab_munmap but actually I think this is when the grant is
withdrawn by the grantor ?

If the grant is withdrawn by the grantor, does the page become
unuseable immediately ?  If so, how can anyone ever use this safely ?

Daniel, could you answer these ones please.

This is intended to allow the kernel to send a close-request notification
when the application that allocated the grant page exits without calling
a proper shutdown (e.g. it crashes, calls _exit, or calls execve).

Is that the kernel of the grantor or of the grantee process? It sounds like
the grantor tells the grantee (who would then be expected to unmap)?

The kernel providing gntalloc is the one that sends the notification, since
the other domain (gntdev) must take action to unmap the page.

Who actually does the unmap, the grantee process or their kernel? I suppose
the process, which is expected to be watching for the notification and is
required to do the unmap itself. IOW the "munmap notification" is a request
to please munmap, not a notification that something has been unmapped out
from beneath the calling process.

Correct.  The remote process actually does the unmap; there is no back-channel
available to notify the kernel of the other side directly (and setting one up
would usually be a waste of resources).

What happens if the unmap notification cannot be set up? Does the call fail
(and unmap what it has done) or does it succeed?

The only reason for the setup to fail is if an invalid event channel or offset
is specified.  It looks like this causes a new mapping to fail (it is freed).

I think the answers to the other two questions depend on the clarifications
above, but I believe it is the case that nothing is unmapped automatically;
all this does is give you a result from evtchn_poll etc. with the
expectation that the caller will then call xengnttab_munmap in a controlled
way themselves.

Without this signal, the kernel has no way to request that the mapper of
the page release it, and since Xen has no grant revocation mechanism, the
page will likely be tied up until the process on the other side is told to
release the page through some other method.
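
For concreteness, here is a minimal sketch of the mapper's side of that
protocol, using the v5 names quoted in this thread (the already-open
xengnttab_handle, the peer's domid/ref, the shared event channel port, the
exact prototypes, and the two status-byte offsets are all assumptions; the
byte convention follows libvchan rather than anything in the API itself):

    #include <stdint.h>
    #include <sys/mman.h>
    #include <xengnttab.h>   /* header name assumed */

    #define SHARER_BYTE 0  /* zeroed by the sharer's kernel if it dies */
    #define MAPPER_BYTE 1  /* zeroed by our kernel if *we* go away     */

    /* Map one grant ref from domid.  Arming our own unmap notification
     * at MAPPER_BYTE covers the reverse direction: the sharer hears
     * about our death.  Per the discussion above, if the notification
     * cannot be set up the whole mapping fails and NULL is returned. */
    static void *map_with_notify(xengnttab_handle *xgt, uint32_t domid,
                                 uint32_t ref, uint32_t port)
    {
        return xengnttab_map_grant_ref_notify(xgt, domid, ref,
                                              PROT_READ | PROT_WRITE,
                                              MAPPER_BYTE, port);
    }

    /* Called when woken on the event channel: nothing was unmapped out
     * from under us, we are simply expected to munmap in a controlled
     * way once the sharer's status byte reads zero. */
    static void release_if_sharer_gone(xengnttab_handle *xgt, void *page)
    {
        if (((volatile uint8_t *)page)[SHARER_BYTE] == 0)
            xengnttab_munmap(xgt, page, 1 /* count, in pages */);
    }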

+/*
+ * Creates and shares pages with another domain.
+ *
...
+void *xengntshr_share_pages(xengntshr_handle *xgs, uint32_t domid,
+                            int count, uint32_t *refs, int writable);

Can this fail ?  Can it partially succeed ?

Daniel?

It can fail if you are out of pages to grant (there is a module parameter
that can be adjusted via sysfs for the maximum), or in the unlikely case
that the kernel itself is out of room in its grant mapping table (or if
the syscall itself encounters -ENOMEM).

Does it either completely succeed or undo partial work, or does it return
partial success somehow?

The limit is checked (and the current count updated) prior to making any
changes.
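
So in API terms the caller only needs to test the return value for NULL;
nothing has to be undone on failure. A sketch against the v5 prototype
quoted above (the open xengntshr_handle and peer domid are assumed):

    #include <stdint.h>
    #include <xengnttab.h>   /* header name assumed */

    /* Share four writable pages with domid; refs[] receives the grant
     * references on success. */
    static void *share_four_pages(xengntshr_handle *xgs, uint32_t domid,
                                  uint32_t refs[4])
    {
        void *pages = xengntshr_share_pages(xgs, domid, 4, refs,
                                            1 /* writable */);
        /* All-or-nothing: the gref limit is checked (and accounted)
         * before anything is shared, so a NULL return -- per-instance
         * limit hit, kernel grant table full, or ENOMEM -- means no
         * pages were granted and refs[] holds nothing valid. */
        return pages;
    }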


+/*
+ * Creates and shares a page with another domain, with unmap notification.
+ *
+ * @parm xgs a handle to an open grant sharing instance
+ * @parm domid the domain to share memory with
+ * @parm refs the grant reference of the pages (output)
+ * @parm writable true if the other domain can write to the page
+ * @parm notify_offset The byte offset in the page to use for unmap
+ *                     notification; -1 for none.
+ * @parm notify_port The event channel port to use for unmap notify, or -1
+ * @return local mapping of the page
+ */
+void *xengntshr_share_page_notify(xengntshr_handle *xgs, uint32_t domid,

What is this `unmap notification' ?

Daniel?

As mentioned above, it is a way for the kernel to request that the other
side unmap a page.  It is probably easiest to understand by looking
at the libvchan driver: one byte of the page is a "status" byte, and
when the side using xengntshr exits or dies, this byte is set to zero
and the event channel is notified.  When the peer is woken by the notify
and sees the status byte set to zero, it removes its own mappings so that
the shared pages can be freed.
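
A sketch of that sharing side under the same assumptions as the earlier
mapper sketch (byte 0 as the sharer's status byte and the shared event
channel port are conventions between the peers, not part of the API):

    #include <stdint.h>
    #include <xengnttab.h>   /* header name assumed */

    #define SHARER_BYTE 0  /* as in the mapper sketch above */

    /* Share one writable page with peer_domid.  gntalloc arms the
     * notification: if this process exits without a clean shutdown,
     * its kernel zeroes SHARER_BYTE and signals `port`, asking the
     * mapper to unmap.  *ref is the grant reference to hand to the
     * peer out of band (e.g. via xenstore). */
    static uint8_t *share_with_notify(xengntshr_handle *xgs,
                                      uint32_t peer_domid, uint32_t port,
                                      uint32_t *ref)
    {
        uint8_t *page = xengntshr_share_page_notify(xgs, peer_domid, ref,
                                                    1 /* writable */,
                                                    SHARER_BYTE, port);
        if (page)
            page[SHARER_BYTE] = 1;  /* "connected", libvchan-style */
        return page;
    }

As clarified further down, xengntshr_munmap with the notify armed triggers
the same zero-and-signal processing, so a tidy shutdown and a crash look
the same to the mapper.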

This is the sending end of the notification requested by the caller of
xengnttab_map_grant_ref_notify I think?

Yes.

+/*
+ * Unmaps the @count pages starting at @start_address, which were mapped by a
+ * call to xengntshr_share_*. Never logs.

Linewrap in the comment.

+ */
+int xengntshr_munmap(xengntshr_handle *xgs, void *start_address,
+                     uint32_t count);

What effect does this have on the peer ?

Daniel?

If this removes the (final copy of the) mapping and a notify offset/port
was set up, that notification processing happens.  Otherwise, the peer
cannot tell when this is called.

So this will always succeed (I think?), but the underlying page cannot be
freed until the other end unmaps it (whether because of the notification or
for some other reason).

Yes.

What is the status of the memory at start_address between the call to
xengntshr_munmap and the other end doing xengnttab_munmap?

A reference to the page is held in gntalloc's gref_list; the page itself is
not usable by that domain until it can verify (gnttab_query_foreign_access)
that the page is not mapped elsewhere.  At that point, the page is released
back to the kernel MM.

What is the status of the mapping made by the peer via
xengnttab_map_grant_ref_notify in that same interval?

gnttab_map_grant_ref_notify is the analogous notification for the gnttab
device, allowing the application using gntalloc to close the channel when
the application that is using it has gone away.  The notification in this
case is not needed to clean up the grant mapping (that has already been done
by the gntdev kernel code), but is useful if the sharing application wishes
to clean up (remove the shared pages or perhaps exit) when its peer exits.
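
A sketch of that sharer-side reaction (libxenevtchn names are taken from
the companion patches in this series; xce, xgs, page and port are assumed
to be set up as in the earlier sketches, with MAPPER_BYTE the offset the
peer armed in its xengnttab_map_grant_ref_notify call):

    #include <stdint.h>
    #include <xenevtchn.h>
    #include <xengnttab.h>   /* header name assumed */

    #define MAPPER_BYTE 1  /* zeroed by gntdev when the mapper goes away */

    /* On a wakeup, check whether the mapper has gone; its grant mapping
     * was already torn down by the gntdev kernel code, so once we drop
     * our own mapping gntalloc can hand the page back to the kernel MM. */
    static void reap_peer(xenevtchn_handle *xce, xengntshr_handle *xgs,
                          volatile uint8_t *page, uint32_t port)
    {
        xenevtchn_port_or_error_t p = xenevtchn_pending(xce);
        if (p < 0 || (uint32_t)p != port)
            return;
        xenevtchn_unmask(xce, p);
        if (page[MAPPER_BYTE] == 0)
            xengntshr_munmap(xgs, (void *)page, 1 /* pages */);
    }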

Who is responsible for reclaiming the underlying memory, the kernel or the
process?

The kernel: gntalloc.c do_cleanup.

Thanks,
Ian.



--
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

