
[Xen-devel] [PATCH v4 1/8] public / x86: Introduce __HYPERVISOR_dm_op...



...as a set of hypercalls to be used by a device model.

As stated in the new docs/designs/dmop.markdown:

"The aim of DMOP is to prevent a compromised device model from
compromising domains other then the one it is associated with. (And is
therefore likely already compromised)."

See that file for further information.

This patch simply adds the boilerplate for the hypercall.

Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
Suggested-by: Ian Jackson <ian.jackson@xxxxxxxxxx>
Suggested-by: Jennifer Herbert <jennifer.herbert@xxxxxxxxxx>
---
Cc: Ian Jackson <ian.jackson@xxxxxxxxxx>
Cc: Jennifer Herbert <jennifer.herbert@xxxxxxxxxx>
Cc: Daniel De Graaf <dgdegra@xxxxxxxxxxxxx>
Cc: Wei Liu <wei.liu2@xxxxxxxxxx>
Cc: Jan Beulich <jbeulich@xxxxxxxx>
Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>

v4:
- Change XEN_GUEST_HANDLE_64 to XEN_GUEST_HANDLE in struct xen_dm_op_buf
  and add the necessary compat code. Drop Jan's R-b since the patch has
  been fundamentally modified.

v3:
- Re-written large portions of dmop.markdown to remove references to
  previous proposals and make it a standalone design doc.

v2:
- Addressed several comments from Jan.
- Removed modification of __XEN_LATEST_INTERFACE_VERSION__ as it is not
  needed in this patch.
---
 docs/designs/dmop.markdown        | 165 ++++++++++++++++++++++++++++++++++++++
 tools/flask/policy/modules/xen.if |   2 +-
 tools/libxc/include/xenctrl.h     |   1 +
 tools/libxc/xc_private.c          |  70 ++++++++++++++++
 tools/libxc/xc_private.h          |   2 +
 xen/arch/x86/hvm/Makefile         |   1 +
 xen/arch/x86/hvm/dm.c             | 149 ++++++++++++++++++++++++++++++++++
 xen/arch/x86/hvm/hvm.c            |   1 +
 xen/arch/x86/hypercall.c          |   2 +
 xen/include/Makefile              |   1 +
 xen/include/public/hvm/dm_op.h    |  71 ++++++++++++++++
 xen/include/public/xen.h          |   1 +
 xen/include/xen/hypercall.h       |  15 ++++
 xen/include/xlat.lst              |   1 +
 xen/include/xsm/dummy.h           |   6 ++
 xen/include/xsm/xsm.h             |   6 ++
 xen/xsm/flask/hooks.c             |   7 ++
 17 files changed, 500 insertions(+), 1 deletion(-)
 create mode 100644 docs/designs/dmop.markdown
 create mode 100644 xen/arch/x86/hvm/dm.c
 create mode 100644 xen/include/public/hvm/dm_op.h

diff --git a/docs/designs/dmop.markdown b/docs/designs/dmop.markdown
new file mode 100644
index 0000000..9f2f0d4
--- /dev/null
+++ b/docs/designs/dmop.markdown
@@ -0,0 +1,165 @@
+DMOP
+====
+
+Introduction
+------------
+
+The aim of DMOP is to prevent a compromised device model from compromising
+domains other than the one it is associated with (and which is therefore
+likely already compromised).
+
+The problem occurs when a device model issues a hypercall that
+includes references to user memory other than the operation structure
+itself, such as with Track dirty VRAM (as used in VGA emulation).
+In this case, the address of this other user memory needs to be vetted,
+to ensure it is not within restricted address ranges, such as kernel
+memory. The real problem comes down to how to vet this address: the
+ideal place to do this is within the privcmd driver, without privcmd
+having to have specific knowledge of the hypercall's semantics.
+
+The Design
+----------
+
+The privcmd driver implements a new restriction ioctl, which takes a domid
+parameter.  After that restriction ioctl is issued, the privcmd driver will
+permit only DMOP hypercalls, and only with the specified target domid.
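+
+A minimal sketch of how a device model might apply this restriction from
+user space is shown below. The ioctl name and argument structure used here
+are purely illustrative (they are not defined by this patch); the real
+interface is whatever the privcmd driver exposes:
+
+struct privcmd_dm_op_restrict {
+    domid_t domid;
+};
+
+static int restrict_privcmd(int privcmd_fd, domid_t domid)
+{
+    struct privcmd_dm_op_restrict r = { .domid = domid };
+
+    /*
+     * After this succeeds, only DMOP hypercalls targeting @domid are
+     * permitted through this file descriptor.
+     */
+    return ioctl(privcmd_fd, IOCTL_PRIVCMD_RESTRICT, &r);
+}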
+
+A DMOP hypercall consists of an array of buffers and lengths, with the
+first one containing the specific DMOP parameters. These can then reference
+further buffers from within the array. Since the only user buffers
+passed are those found within that array, they can all be audited by
+privcmd.
+
+The following code illustrates this idea:
+
+struct xen_dm_op {
+    uint32_t op;
+};
+
+struct xen_dm_op_buf {
+    XEN_GUEST_HANDLE(void) h;
+    unsigned long size;
+};
+typedef struct xen_dm_op_buf xen_dm_op_buf_t;
+
+enum neg_errnoval
+HYPERVISOR_dm_op(domid_t domid,
+                 xen_dm_op_buf_t bufs[],
+                 unsigned int nr_bufs)
+
+@domid is the domain the hypercall operates on.
+@bufs points to an array of buffers where @bufs[0] contains a struct
+xen_dm_op, describing the specific device model operation and its
+parameters.
+@bufs[1..] may be referenced in the parameters for the purposes of
+passing extra information to or from the domain.
+@nr_bufs is the number of buffers in the @bufs array.
+
+It is forbidden for the above struct (xen_dm_op) to contain any guest
+handles. If they are needed, they should instead be in
+HYPERVISOR_dm_op->bufs.
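+
+For illustration only (no such sub-op exists in this patch), an operation
+that needs to return bulk data would therefore name one of the secondary
+buffers rather than embed a handle:
+
+struct xen_dm_op_hypothetical_get_data {
+    /* IN - index into @bufs of the buffer to fill with data */
+    uint32_t data_buf_idx;
+    /* OUT - number of bytes written to that buffer */
+    uint32_t data_size;
+};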
+
+Validation by privcmd driver
+----------------------------
+
+If the privcmd driver has been restricted to a specific domain (using the
+new ioctl), when it receives an op, it will:
+
+1. Check that the hypercall is a DMOP.
+
+2. Check that domid == the restricted domid.
+
+3. For each of the @nr_bufs entries in @bufs: check that @h and @size
+   describe a buffer wholly within the user space part of the virtual
+   address space (e.g. Linux would use access_ok()), as sketched below.
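+
+The following sketch shows the shape of check 3 in a Linux privcmd driver.
+It is illustrative only: privcmd_dm_op_buf is a hypothetical kernel-side
+mirror of xen_dm_op_buf, and the exact access_ok() signature differs
+between kernel versions.
+
+/* Hypothetical kernel-side mirror of xen_dm_op_buf. */
+struct privcmd_dm_op_buf {
+    void __user *uptr;
+    size_t size;
+};
+
+static int privcmd_audit_dm_op_bufs(const struct privcmd_dm_op_buf *kbufs,
+                                    unsigned int nr_bufs)
+{
+    unsigned int i;
+
+    /* Checks 1 and 2 (hypercall number and domid) have already passed. */
+    for ( i = 0; i < nr_bufs; i++ )
+    {
+        /* Reject any buffer not wholly within user address space. */
+        if ( !access_ok(kbufs[i].uptr, kbufs[i].size) )
+            return -EFAULT;
+    }
+
+    return 0;
+}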
+
+
+Xen Implementation
+------------------
+
+Since DMOP buffers need to be copied from or to the guest, functions for
+doing this would be written as below.  Note that care is taken to prevent
+damage from buffer under- or over-run situations.  If the DMOP is called
+with incorrectly sized buffers, zeros will be read, while any extra data
+is ignored.
+
+static bool copy_buf_from_guest(xen_dm_op_buf_t bufs[],
+                                unsigned int nr_bufs, void *dst,
+                                unsigned int idx, size_t dst_size)
+{
+    size_t size = min_t(size_t, dst_size, bufs[idx].size);
+
+    return !copy_from_guest(dst, bufs[idx].h, size);
+}
+
+static bool copy_buf_to_guest(xen_dm_op_buf_t bufs[],
+                              unsigned int nr_bufs, unsigned int idx,
+                              void *src, size_t src_size)
+{
+    size_t size = min_t(size_t, bufs[idx].size, src_size);
+
+    return !copy_to_guest(bufs[idx].h, src, size);
+}
+
+This makes do_dm_op straightforward to implement, as below:
+
+static int dm_op(domid_t domid,
+                 unsigned int nr_bufs,
+                 xen_dm_op_buf_t bufs[])
+{
+    struct domain *d;
+    struct xen_dm_op op;
+    long rc;
+
+    rc = rcu_lock_remote_domain_by_id(domid, &d);
+    if ( rc )
+        return rc;
+
+    if ( !has_hvm_container_domain(d) )
+        goto out;
+
+    rc = xsm_dm_op(XSM_DM_PRIV, d);
+    if ( rc )
+        goto out;
+
+    if ( !copy_buf_from_guest(bufs, nr_bufs, &op, 0, sizeof(op)) )
+    {
+        rc = -EFAULT;
+        goto out;
+    }
+
+    switch ( op.op )
+    {
+    default:
+        rc = -EOPNOTSUPP;
+        break;
+    }
+
+    if ( !rc &&
+         !copy_buf_to_guest(bufs, nr_bufs, 0, &op, sizeof(op)) )
+        rc = -EFAULT;
+
+ out:
+    rcu_unlock_domain(d);
+
+    return rc;
+}
+
+long do_dm_op(domid_t domid,
+              unsigned int nr_bufs,
+              XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs)
+{
+    struct xen_dm_op_buf *nat;
+    int rc;
+
+    nat = xzalloc_array(struct xen_dm_op_buf, nr_bufs);
+    if ( !nat )
+        return -ENOMEM;
+
+    if ( copy_from_guest_offset(nat, bufs, 0, nr_bufs) )
+    {
+        xfree(nat);
+        return -EFAULT;
+    }
+
+    rc = dm_op(domid, nr_bufs, nat);
+
+    xfree(nat);
+
+    return rc;
+}
diff --git a/tools/flask/policy/modules/xen.if b/tools/flask/policy/modules/xen.if
index 1aca75d..f9254c2 100644
--- a/tools/flask/policy/modules/xen.if
+++ b/tools/flask/policy/modules/xen.if
@@ -151,7 +151,7 @@ define(`device_model', `
 
        allow $1 $2_target:domain { getdomaininfo shutdown };
        allow $1 $2_target:mmu { map_read map_write adjust physmap target_hack };
-       allow $1 $2_target:hvm { getparam setparam trackdirtyvram hvmctl irqlevel pciroute pcilevel cacheattr send_irq };
+       allow $1 $2_target:hvm { getparam setparam trackdirtyvram hvmctl irqlevel pciroute pcilevel cacheattr send_irq dm };
 ')
 
 # make_device_model(priv, dm_dom, hvm_dom)
diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 4ab0f57..2ba46d7 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -41,6 +41,7 @@
 #include <xen/sched.h>
 #include <xen/memory.h>
 #include <xen/grant_table.h>
+#include <xen/hvm/dm_op.h>
 #include <xen/hvm/params.h>
 #include <xen/xsm/flask_op.h>
 #include <xen/tmem.h>
diff --git a/tools/libxc/xc_private.c b/tools/libxc/xc_private.c
index d57c39a..8e49635 100644
--- a/tools/libxc/xc_private.c
+++ b/tools/libxc/xc_private.c
@@ -776,6 +776,76 @@ int xc_ffs64(uint64_t x)
     return l ? xc_ffs32(l) : h ? xc_ffs32(h) + 32 : 0;
 }
 
+int do_dm_op(xc_interface *xch, domid_t domid, unsigned int nr_bufs, ...)
+{
+    int ret = -1;
+    struct  {
+        void *u;
+        void *h;
+    } *bounce;
+    DECLARE_HYPERCALL_BUFFER(xen_dm_op_buf_t, bufs);
+    va_list args;
+    unsigned int idx;
+
+    bounce = calloc(nr_bufs, sizeof(*bounce));
+    if ( bounce == NULL )
+        goto fail1;
+
+    bufs = xc_hypercall_buffer_alloc(xch, bufs, sizeof(*bufs) * nr_bufs);
+    if ( bufs == NULL )
+        goto fail2;
+
+    va_start(args, nr_bufs);
+    for (idx = 0; idx < nr_bufs; idx++)
+    {
+        void *u = va_arg(args, void *);
+        size_t size = va_arg(args, size_t);
+
+        bounce[idx].h = xencall_alloc_buffer(xch->xcall, size);
+        if ( bounce[idx].h == NULL )
+        {
+            va_end(args);
+            goto fail3;
+        }
+
+        memcpy(bounce[idx].h, u, size);
+        bounce[idx].u = u;
+
+        set_xen_guest_handle_raw(bufs[idx].h, bounce[idx].h);
+        bufs[idx].size = size;
+    }
+    va_end(args);
+
+    ret = xencall3(xch->xcall, __HYPERVISOR_dm_op,
+                   domid, nr_bufs, HYPERCALL_BUFFER_AS_ARG(bufs));
+    if ( ret < 0 )
+        goto fail4;
+
+    while ( idx-- != 0 )
+    {
+        memcpy(bounce[idx].u, bounce[idx].h, bufs[idx].size);
+        xencall_free_buffer(xch->xcall, bounce[idx].h);
+    }
+
+    xc_hypercall_buffer_free(xch, bufs);
+
+    free(bounce);
+
+    return 0;
+
+ fail4:
+    idx = nr_bufs;
+
+ fail3:
+    while ( idx-- != 0 )
+        xencall_free_buffer(xch->xcall, bounce[idx].h);
+
+    xc_hypercall_buffer_free(xch, bufs);
+
+ fail2:
+    free(bounce);
+
+ fail1:
+    return ret;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxc/xc_private.h b/tools/libxc/xc_private.h
index 97445ae..f191320 100644
--- a/tools/libxc/xc_private.h
+++ b/tools/libxc/xc_private.h
@@ -422,6 +422,8 @@ int xc_vm_event_control(xc_interface *xch, domid_t domain_id, unsigned int op,
 void *xc_vm_event_enable(xc_interface *xch, domid_t domain_id, int param,
                          uint32_t *port);
 
+int do_dm_op(xc_interface *xch, domid_t domid, unsigned int nr_bufs, ...);
+
 #endif /* __XC_PRIVATE_H__ */
 
 /*
diff --git a/xen/arch/x86/hvm/Makefile b/xen/arch/x86/hvm/Makefile
index f750d13..5869d1b 100644
--- a/xen/arch/x86/hvm/Makefile
+++ b/xen/arch/x86/hvm/Makefile
@@ -2,6 +2,7 @@ subdir-y += svm
 subdir-y += vmx
 
 obj-y += asid.o
+obj-y += dm.o
 obj-y += emulate.o
 obj-y += hpet.o
 obj-y += hvm.o
diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
new file mode 100644
index 0000000..f00bc2c
--- /dev/null
+++ b/xen/arch/x86/hvm/dm.c
@@ -0,0 +1,149 @@
+/*
+ * Copyright (c) 2016 Citrix Systems Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/guest_access.h>
+#include <xen/hypercall.h>
+#include <xen/sched.h>
+
+#include <asm/hvm/ioreq.h>
+
+#include <xsm/xsm.h>
+
+static bool copy_buf_from_guest(xen_dm_op_buf_t bufs[],
+                                unsigned int nr_bufs, void *dst,
+                                unsigned int idx, size_t dst_size)
+{
+    size_t size = min_t(size_t, dst_size, bufs[idx].size);
+
+    return !copy_from_guest(dst, bufs[idx].h, size);
+}
+
+static bool copy_buf_to_guest(xen_dm_op_buf_t bufs[],
+                              unsigned int nr_bufs, unsigned int idx,
+                              void *src, size_t src_size)
+{
+    size_t size = min_t(size_t, bufs[idx].size, src_size);
+
+    return !copy_to_guest(bufs[idx].h, src, size);
+}
+
+static int dm_op(domid_t domid,
+                 unsigned int nr_bufs,
+                 xen_dm_op_buf_t bufs[])
+{
+    struct domain *d;
+    struct xen_dm_op op;
+    long rc;
+
+    rc = rcu_lock_remote_domain_by_id(domid, &d);
+    if ( rc )
+        return rc;
+
+    if ( !has_hvm_container_domain(d) )
+        goto out;
+
+    rc = xsm_dm_op(XSM_DM_PRIV, d);
+    if ( rc )
+        goto out;
+
+    if ( !copy_buf_from_guest(bufs, nr_bufs, &op, 0, sizeof(op)) )
+    {
+        rc = -EFAULT;
+        goto out;
+    }
+
+    switch ( op.op )
+    {
+    default:
+        rc = -EOPNOTSUPP;
+        break;
+    }
+
+    if ( !rc &&
+         !copy_buf_to_guest(bufs, nr_bufs, 0, &op, sizeof(op)) )
+        rc = -EFAULT;
+
+ out:
+    rcu_unlock_domain(d);
+
+    return rc;
+}
+
+int compat_dm_op(domid_t domid,
+                 unsigned int nr_bufs,
+                 COMPAT_HANDLE_PARAM(compat_dm_op_buf_t) bufs)
+{
+    struct xen_dm_op_buf *nat;
+    unsigned int i;
+    int rc = -EFAULT;
+
+    nat = xzalloc_array(struct xen_dm_op_buf, nr_bufs);
+    if ( !nat )
+        return -ENOMEM;
+
+    for ( i = 0; i < nr_bufs; i++ )
+    {
+        struct compat_dm_op_buf cmp;
+
+        if ( copy_from_compat_offset(&cmp, bufs, i, 1) )
+            goto out;
+
+#define XLAT_dm_op_buf_HNDL_h(_d_, _s_) \
+        guest_from_compat_handle((_d_)->h, (_s_)->h)
+
+        XLAT_dm_op_buf(&nat[i], &cmp);
+
+#undef XLAT_dm_op_buf_HNDL_h
+    }
+
+    rc = dm_op(domid, nr_bufs, nat);
+
+ out:
+    xfree(nat);
+
+    return rc;
+}
+
+long do_dm_op(domid_t domid,
+              unsigned int nr_bufs,
+              XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs)
+{
+    struct xen_dm_op_buf *nat;
+    int rc;
+
+    nat = xzalloc_array(struct xen_dm_op_buf, nr_bufs);
+    if ( !nat )
+        return -ENOMEM;
+
+    if ( copy_from_guest_offset(nat, bufs, 0, nr_bufs) )
+    {
+        xfree(nat);
+        return -EFAULT;
+    }
+
+    rc = dm_op(domid, nr_bufs, nat);
+
+    xfree(nat);
+
+    return rc;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 2ec0800..cb501e5 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3852,6 +3852,7 @@ static const hypercall_table_t hvm_hypercall_table[] = {
     COMPAT_CALL(platform_op),
     COMPAT_CALL(mmuext_op),
     HYPERCALL(xenpmu_op),
+    COMPAT_CALL(dm_op),
     HYPERCALL(arch_1)
 };
 
diff --git a/xen/arch/x86/hypercall.c b/xen/arch/x86/hypercall.c
index 70a30a5..8dd19de 100644
--- a/xen/arch/x86/hypercall.c
+++ b/xen/arch/x86/hypercall.c
@@ -66,6 +66,7 @@ const hypercall_args_t hypercall_args_table[NR_hypercalls] =
     ARGS(kexec_op, 2),
     ARGS(tmem_op, 1),
     ARGS(xenpmu_op, 2),
+    ARGS(dm_op, 3),
     ARGS(mca, 1),
     ARGS(arch_1, 1),
 };
@@ -128,6 +129,7 @@ static const hypercall_table_t pv_hypercall_table[] = {
     HYPERCALL(tmem_op),
 #endif
     HYPERCALL(xenpmu_op),
+    COMPAT_CALL(dm_op),
     HYPERCALL(mca),
     HYPERCALL(arch_1),
 };
diff --git a/xen/include/Makefile b/xen/include/Makefile
index 1e80a0b..aca7f20 100644
--- a/xen/include/Makefile
+++ b/xen/include/Makefile
@@ -27,6 +27,7 @@ headers-$(CONFIG_X86)     += compat/arch-x86/xen-mca.h
 headers-$(CONFIG_X86)     += compat/arch-x86/xen.h
 headers-$(CONFIG_X86)     += compat/arch-x86/xen-$(compat-arch-y).h
 headers-$(CONFIG_X86)     += compat/hvm/hvm_vcpu.h
+headers-$(CONFIG_X86)     += compat/hvm/dm_op.h
 headers-y                 += compat/arch-$(compat-arch-y).h compat/pmu.h compat/xlat.h
 headers-$(CONFIG_FLASK)   += compat/xsm/flask_op.h
 
diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
new file mode 100644
index 0000000..40e7a96
--- /dev/null
+++ b/xen/include/public/hvm/dm_op.h
@@ -0,0 +1,71 @@
+/*
+ * Copyright (c) 2016, Citrix Systems Inc
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __XEN_PUBLIC_HVM_DM_OP_H__
+#define __XEN_PUBLIC_HVM_DM_OP_H__
+
+#if defined(__XEN__) || defined(__XEN_TOOLS__)
+
+#include "../xen.h"
+
+#define XEN_DMOP_invalid 0
+
+struct xen_dm_op {
+    uint32_t op;
+};
+
+struct xen_dm_op_buf {
+    XEN_GUEST_HANDLE(void) h;
+    unsigned long size;
+};
+typedef struct xen_dm_op_buf xen_dm_op_buf_t;
+DEFINE_XEN_GUEST_HANDLE(xen_dm_op_buf_t);
+
+/* ` enum neg_errnoval
+ * ` HYPERVISOR_dm_op(domid_t domid,
+ * `                  xen_dm_op_buf_t bufs[],
+ * `                  unsigned int nr_bufs)
+ * `
+ *
+ * @domid is the domain the hypercall operates on.
+ * @bufs points to an array of buffers where @bufs[0] contains a struct
+ * xen_dm_op, describing the specific device model operation and its
+ * parameters.
+ * @bufs[1..] may be referenced in the parameters for the purposes of
+ * passing extra information to or from the domain.
+ * @nr_bufs is the number of buffers in the @bufs array.
+ */
+
+#endif /* __XEN__ || __XEN_TOOLS__ */
+
+#endif /* __XEN_PUBLIC_HVM_DM_OP_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index 6593026..91ba8bb 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -120,6 +120,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
 #define __HYPERVISOR_tmem_op              38
 #define __HYPERVISOR_xc_reserved_op       39 /* reserved for XenClient */
 #define __HYPERVISOR_xenpmu_op            40
+#define __HYPERVISOR_dm_op                41
 
 /* Architecture-specific hypercall definitions. */
 #define __HYPERVISOR_arch_0               48
diff --git a/xen/include/xen/hypercall.h b/xen/include/xen/hypercall.h
index 207a0e8..8d4824f 100644
--- a/xen/include/xen/hypercall.h
+++ b/xen/include/xen/hypercall.h
@@ -15,6 +15,7 @@
 #include <public/tmem.h>
 #include <public/version.h>
 #include <public/pmu.h>
+#include <public/hvm/dm_op.h>
 #include <asm/hypercall.h>
 #include <xsm/xsm.h>
 
@@ -141,6 +142,12 @@ do_xenoprof_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
 extern long
 do_xenpmu_op(unsigned int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg);
 
+extern long
+do_dm_op(
+    domid_t domid,
+    unsigned int nr_bufs,
+    XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs);
+
 #ifdef CONFIG_COMPAT
 
 extern int
@@ -190,6 +197,14 @@ extern int compat_multicall(
     XEN_GUEST_HANDLE_PARAM(multicall_entry_compat_t) call_list,
     uint32_t nr_calls);
 
+#include <compat/hvm/dm_op.h>
+
+extern int
+compat_dm_op(
+    domid_t domid,
+    unsigned int nr_bufs,
+    COMPAT_HANDLE_PARAM(compat_dm_op_buf_t) bufs);
+
 #endif
 
 void arch_get_xen_caps(xen_capabilities_info_t *info);
diff --git a/xen/include/xlat.lst b/xen/include/xlat.lst
index bdf1d05..0a2f84d 100644
--- a/xen/include/xlat.lst
+++ b/xen/include/xlat.lst
@@ -129,3 +129,4 @@
 ?      flask_setenforce                xsm/flask_op.h
 !      flask_sid_context               xsm/flask_op.h
 ?      flask_transition                xsm/flask_op.h
+!      dm_op_buf                       hvm/dm_op.h
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index 95460af..b206f5a 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -727,6 +727,12 @@ static XSM_INLINE int xsm_pmu_op (XSM_DEFAULT_ARG struct domain *d, unsigned int
     }
 }
 
+static XSM_INLINE int xsm_dm_op(XSM_DEFAULT_ARG struct domain *d)
+{
+    XSM_ASSERT_ACTION(XSM_DM_PRIV);
+    return xsm_default_action(action, current->domain, d);
+}
+
 #endif /* CONFIG_X86 */
 
 #include <public/version.h>
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index 5dc59dd..e2d336f 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -184,6 +184,7 @@ struct xsm_operations {
     int (*ioport_permission) (struct domain *d, uint32_t s, uint32_t e, uint8_t allow);
     int (*ioport_mapping) (struct domain *d, uint32_t s, uint32_t e, uint8_t allow);
     int (*pmu_op) (struct domain *d, unsigned int op);
+    int (*dm_op) (struct domain *d);
 #endif
     int (*xen_version) (uint32_t cmd);
 };
@@ -722,6 +723,11 @@ static inline int xsm_pmu_op (xsm_default_t def, struct domain *d, unsigned int
     return xsm_ops->pmu_op(d, op);
 }
 
+static inline int xsm_dm_op(xsm_default_t def, struct domain *d)
+{
+    return xsm_ops->dm_op(d);
+}
+
 #endif /* CONFIG_X86 */
 
 static inline int xsm_xen_version (xsm_default_t def, uint32_t op)
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 040a251..383dd4f 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1635,6 +1635,12 @@ static int flask_pmu_op (struct domain *d, unsigned int op)
         return -EPERM;
     }
 }
+
+static int flask_dm_op(struct domain *d)
+{
+    return current_has_perm(d, SECCLASS_HVM, HVM__DM);
+}
+
 #endif /* CONFIG_X86 */
 
 static int flask_xen_version (uint32_t op)
@@ -1814,6 +1820,7 @@ static struct xsm_operations flask_ops = {
     .ioport_permission = flask_ioport_permission,
     .ioport_mapping = flask_ioport_mapping,
     .pmu_op = flask_pmu_op,
+    .dm_op = flask_dm_op,
 #endif
     .xen_version = flask_xen_version,
 };
-- 
2.1.4

