
Re: [Xen-devel] [PATCH 1/6] xenbus: Support HVM backends



On 12/19/2011 01:16 PM, Konrad Rzeszutek Wilk wrote:
On Mon, Dec 19, 2011 at 12:51:15PM -0500, Daniel De Graaf wrote:
On 12/16/2011 02:56 PM, Konrad Rzeszutek Wilk wrote:
On Wed, Dec 14, 2011 at 03:12:09PM -0500, Daniel De Graaf wrote:
Add HVM implementations of xenbus_(map,unmap)_ring_v(alloc,free) so
that ring mappings can be done without using GNTMAP_contains_pte which
is not supported on HVM.

Signed-off-by: Daniel De Graaf <dgdegra@xxxxxxxxxxxxx>
---
  drivers/xen/xenbus/xenbus_client.c |  155 +++++++++++++++++++++++++++++-------
  1 files changed, 125 insertions(+), 30 deletions(-)

diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
index 1906125..ecd695d 100644
--- a/drivers/xen/xenbus/xenbus_client.c
+++ b/drivers/xen/xenbus/xenbus_client.c
@@ -32,16 +32,27 @@

  #include <linux/slab.h>
  #include <linux/types.h>
+#include <linux/spinlock.h>
  #include <linux/vmalloc.h>
  #include <linux/export.h>
  #include <asm/xen/hypervisor.h>
  #include <asm/xen/page.h>
  #include <xen/interface/xen.h>
  #include <xen/interface/event_channel.h>
+#include <xen/balloon.h>
  #include <xen/events.h>
  #include <xen/grant_table.h>
  #include <xen/xenbus.h>

+struct xenbus_map_node {
+       struct list_head next;
+       struct page *page;
+       grant_handle_t handle;
+};
+
+static DEFINE_SPINLOCK(xenbus_valloc_lock);
+static LIST_HEAD(xenbus_valloc_pages);

Could we use this for what the PV version of xenbus_unmap_ring_vfree
does? (where it uses the vmlist_lock to look for the appropriate vaddr).

Could the vmlist_lock be removed and instead we can use this structure
for both PV and HVM?

Yes, the next version will do this.
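Roughly, the shared lookup could look like this (a sketch only: the
xenbus_find_map_node() name and the vaddr field are placeholders, since
the struct as posted above only records a struct page for the HVM case):

static struct xenbus_map_node *xenbus_find_map_node(void *vaddr)
{
        struct xenbus_map_node *node, *found = NULL;

        /*
         * Both map paths would queue a node on xenbus_valloc_pages;
         * unmap then finds and unlinks it under the same lock instead
         * of walking vmlist.
         */
        spin_lock(&xenbus_valloc_lock);
        list_for_each_entry(node, &xenbus_valloc_pages, next) {
                if (node->vaddr == vaddr) {     /* hypothetical field */
                        found = node;
                        list_del(&node->next);
                        break;
                }
        }
        spin_unlock(&xenbus_valloc_lock);

        return found;
}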

[...]
+
+/**
+ * xenbus_map_ring_valloc
+ * @dev: xenbus device
+ * @gnt_ref: grant reference
+ * @vaddr: pointer to address to be filled out by mapping
+ *
+ * Based on Rusty Russell's skeleton driver's map_page.
+ * Map a page of memory into this domain from another domain's grant table.
+ * xenbus_map_ring_valloc allocates a page of virtual address space, maps the
+ * page to that address, and sets *vaddr to that address.
+ * Returns 0 on success, and GNTST_* (see xen/include/interface/grant_table.h)
+ * or -ENOMEM on error. If an error is returned, device will switch to
+ * XenbusStateClosing and the error message will be saved in XenStore.
+ */
+int xenbus_map_ring_valloc(struct xenbus_device *dev, int gnt_ref, void **vaddr)
+{
+       if (xen_pv_domain())
+               return xenbus_map_ring_valloc_pv(dev, gnt_ref, vaddr);
+       else
+               return xenbus_map_ring_valloc_hvm(dev, gnt_ref, vaddr);

This could be done in a similar way to how Annie's granttable v2 patches
do it.

Meaning define something like:

struct xenbus_ring_ops {
        int (*map)(struct xenbus_device *dev, int gnt, void **vaddr);
        ...

And then define two variants of it (PV and HVM):

struct xenbus_ring_ops pv_ring_ops = {
        .map = xenbus_map_ring_valloc_pv,
        ...
};

have a static to which either one will be assigned:

static struct xenbus_ring_ops *ring_ops;

And in the init function:

void __init xenbus_ring_ops_init(void)
{
        if (xen_hvm_domain())
                ring_ops = &hvm_ring_ops;
        else
        ...

And then xenbus_init() calls xenbus_ring_ops_init().

Then these 'xenbus_map_ring_valloc' end up just using the
ring_ops->map call.
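Putting those fragments together, the whole pattern would be roughly as
follows (the unmap member and the *_vfree_* names are extrapolated from
the map/unmap pairing in the patch, not something already posted):

struct xenbus_ring_ops {
        int (*map)(struct xenbus_device *dev, int gnt_ref, void **vaddr);
        int (*unmap)(struct xenbus_device *dev, void *vaddr);
};

static const struct xenbus_ring_ops pv_ring_ops = {
        .map = xenbus_map_ring_valloc_pv,
        .unmap = xenbus_unmap_ring_vfree_pv,
};

static const struct xenbus_ring_ops hvm_ring_ops = {
        .map = xenbus_map_ring_valloc_hvm,
        .unmap = xenbus_unmap_ring_vfree_hvm,
};

static const struct xenbus_ring_ops *ring_ops;

/* Called once from xenbus_init(), before any rings are mapped. */
void __init xenbus_ring_ops_init(void)
{
        if (xen_hvm_domain())
                ring_ops = &hvm_ring_ops;
        else
                ring_ops = &pv_ring_ops;
}

/* The public entry point then collapses to one indirect call. */
int xenbus_map_ring_valloc(struct xenbus_device *dev, int gnt_ref, void **vaddr)
{
        return ring_ops->map(dev, gnt_ref, vaddr);
}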

Is the reason for doing this just to avoid overhead? The overhead from
an indirect function call is much higher than from an integer comparison
(which is what xen_pv_domain() resolves to). In this case, the _pv and _hvm
functions are both inlined into the dispatch function.
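For reference, xen_pv_domain() boils down to a test of a global enum;
roughly, from include/xen/xen.h:

enum xen_domain_type {
        XEN_NATIVE,             /* running on bare hardware    */
        XEN_PV_DOMAIN,          /* running in a PV domain      */
        XEN_HVM_DOMAIN,         /* running in a Xen HVM domain */
};

extern enum xen_domain_type xen_domain_type;

#define xen_domain()            (xen_domain_type != XEN_NATIVE)
#define xen_pv_domain()         (xen_domain() && \
                                 xen_domain_type == XEN_PV_DOMAIN)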

Do we care about that? How often are these calls made?

Not all that often - domain creation and destruction or device plug/unplug.
So performance doesn't really matter. Is there a reason to prefer an _ops
structure for this instead of direct calls?

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel